IT Consultation

This paper covers a number of IT issues within a start-up company called Fast-Paced Financial. All scenarios follow from the company’s decision to deploy a Windows Server 2008 network.

1. Data loss prevention and elimination of network downtime are the main concerns of the business owner. In this regard, the solution offered by Windows Server 2008 is highly beneficial. The network infrastructure under Active Directory Domain Services (AD DS) is organized in a strictly hierarchical way, providing a high level of reliability and security to the business. Active Directory runs a set of services on domain controllers (DCs), handling network objects in accordance with specific policies. Objects such as sites, organizational units, domains, trees, and forests are the core of AD. Domain services manage those objects by means of group policies created for each particular type of activity.

All user data can be automatically replicated to the Windows Server 2008 machines. When a user logs into AD, any data changes made while offline are immediately replicated. The process then repeats at customizable intervals. The group policy responsible for replication constantly monitors user activity and detects changes. There is still a risk of losing data while a user works offline: any document created during this period can be lost for a number of reasons. Thus, users must be warned about this possibility and encouraged to make backup copies themselves (Richards, 2008).
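
The logon-time replication described above can be sketched in miniature: on login, any document modified after the last successful sync is queued for replication. This is an illustrative model, not a Windows API; the function name, the timestamp representation, and the file names are all assumptions.

```python
# Hypothetical sketch of the replication logic described above: on login,
# documents modified while offline (after the last successful sync)
# are the ones that must be replicated to the server.
def changes_to_replicate(documents, last_sync_time):
    """Return the names of documents modified since the last sync.

    `documents` maps a file name to its last-modified timestamp.
    All names and values here are illustrative placeholders.
    """
    return [name for name, mtime in documents.items() if mtime > last_sync_time]

# Example: two files were edited while offline, one was not.
last_sync = 1_000_000
docs = {"report.docx": 1_000_500, "notes.txt": 999_000, "budget.xlsx": 1_000_900}
print(sorted(changes_to_replicate(docs, last_sync)))  # → ['budget.xlsx', 'report.docx']
```

Anything created and then lost before the sync fires would never appear in this list, which is exactly the offline-work risk the paragraph warns about.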

The chance of network downtime is very low with the Windows Server 2008 platform. As long as the network infrastructure comprises at least two DCs, reliability is sufficiently high. However, some services may be affected if the server holding the Flexible Single Master Operations (FSMO) roles fails. This means that servers must be well protected against power outages and employ disk redundancy configurations such as RAID (Redundant Array of Independent Disks). Regular backups should also be made part of the corporate IT policy.

2. Fast-Paced Financial (FPF) intends to use its company name as its DNS name. FPF has its main office in New York City and remote satellite offices in Houston, Indianapolis, and Los Angeles. FPF wants to build a forest configuration with a separate child domain representing each office. The solution for this scenario is shown in Diagram 1:

The main DNS server is located in the New York office and manages the namespace of the entire forest. In accordance with company policy, subordinate DNS servers reside in the remote offices. It is proposed to give the Houston, Indianapolis, and Los Angeles offices their own third-level domain names. The domain name structure could be flat, meaning that a single second-level name would suffice for all objects in the IT infrastructure of Fast-Paced Financial. The managerial decision, however, requires third-level naming. Such an approach simplifies the distinction between geographically distributed regional offices; for instance, it is most obviously useful with email addresses. The FPF domain name is registered under the .com TLD (Top-Level Domain). It is impractical to register separate domain names for regional offices with domain registrar authorities. Domain resources under the third-level identifiers can be resolved by secondary DNS servers in each satellite office of Fast-Paced Financial. The primary DNS server located in New York will handle all requests between regional offices and from outside the company.
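
The delegation described above can be sketched as a longest-suffix lookup: the New York primary server is authoritative for the second-level zone, while each satellite office's secondary server is authoritative for its own third-level zone. The zone labels and server names below are placeholders, since the essay's actual domain names are not given.

```python
# Hedged sketch of DNS zone delegation (all names are assumed placeholders):
# each zone maps to the server that is authoritative for it.
ZONES = {
    "example.com": "dns-ny",           # primary server, New York office
    "houston.example.com": "dns-hou",  # satellite-office secondaries
    "indy.example.com": "dns-ind",
    "la.example.com": "dns-la",
}

def authoritative_server(fqdn):
    """Walk up the name, one label at a time, to the longest matching zone."""
    labels = fqdn.split(".")
    for i in range(len(labels)):
        zone = ".".join(labels[i:])
        if zone in ZONES:
            return ZONES[zone]
    return None  # outside the company's namespace

print(authoritative_server("mail.houston.example.com"))  # → dns-hou
print(authoritative_server("www.example.com"))           # → dns-ny
```

With this layout, a name like a user's email address immediately identifies the regional office, which is the benefit the managerial decision is after.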

As shown in Diagram 1, each geographic location is represented in Active Directory as a domain. Domains are grouped into trees. A tree is a structure comprising one or more domains within a trust hierarchy, providing a contiguous namespace. Domain trees that share a common directory schema, global catalog, logical structure, and directory configuration compose a forest. The forest determines the boundary of the organization’s IT infrastructure, within which computers, users, groups, and other network objects can be accessed.
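
The containment just described can be modeled as a minimal data structure: child domains hang off a parent, and the contiguous namespace falls out of concatenating labels up the tree. The class and the root name below are illustrative assumptions, not part of AD itself.

```python
# Minimal sketch of the domain-tree containment described above.
class Domain:
    def __init__(self, name, parent=None):
        self.name = name          # single label for children, full name for root
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def fqdn(self):
        # Contiguous namespace: a child's name is its label plus the parent's name.
        if self.parent is None:
            return self.name
        return f"{self.name}.{self.parent.fqdn()}"

# Forest root domain (placeholder name) with one child domain per office.
root = Domain("example.com")
houston = Domain("houston", parent=root)
print(houston.fqdn())  # → houston.example.com
```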

3. As FPF’s network evolves, some scalability issues arise. In this scenario, the server responsible for one of the FSMO (Flexible Single Master Operations) roles has failed. Because the failed role was the PDC Emulator, users are affected.

PDC, which stands for Primary Domain Controller, is an obsolete role that was responsible for the NT4 predecessor of today’s AD (Clines et al., 2009). However, the PDC Emulator service remains an important part of the domain’s operations. It manages time synchronization throughout the whole Windows domain, GPOs (Group Policy Objects), and password changes. Time synchronization is of particular interest with regard to its impact on domain accessibility. Since user authentication is based on the Kerberos protocol, it depends on the timestamps in encrypted requests and replies. With the PDC Emulator gone, the time on users’ PCs will drift from the time used by the AD authentication service, resulting in failed authentication. Soon after a PDC Emulator failure, users will not be able to log into AD. In addition, no newly created GPO or password change will be replicated between domain controllers while the PDC Emulator is unavailable.
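
The clock-skew dependency can be illustrated directly: Kerberos rejects authenticators whose timestamp differs from the DC's clock by more than the allowed skew, which defaults to 5 minutes in a Windows domain. The function below is a simplified sketch of that check, not the actual protocol implementation.

```python
from datetime import datetime, timedelta, timezone

# Sketch of why PDC Emulator failure breaks logons: Kerberos rejects
# authenticators whose timestamp drifts beyond the allowed clock skew
# (5 minutes by default in a Windows domain).
MAX_SKEW = timedelta(minutes=5)

def accept_authenticator(client_time, dc_time, max_skew=MAX_SKEW):
    """Return True if the request's timestamp is within the allowed skew."""
    return abs(client_time - dc_time) <= max_skew

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(accept_authenticator(now, now + timedelta(minutes=2)))   # → True
print(accept_authenticator(now, now + timedelta(minutes=10)))  # → False, logon fails
```

Once workstation clocks drift past that window, as they eventually will without the PDC Emulator steering time synchronization, every logon in the domain starts failing.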

In order to restore the PDC Emulator, the server running the FSMO roles must be examined. In the easiest case, it may be enough to restart the appropriate service, which might have failed for some reason. There is a possibility that the service will not restart, or that the PDC Emulator is not the only service that failed on that particular server. In that case, it would be prudent to reboot the server. Sometimes memory leaks can cause much more damage, so planned periodic reboots would make a good corporate policy.

The failure of other FSMO roles could also affect the company and its users. For instance, the Infrastructure Master role, which is responsible for proper handling and replication of cross-domain object references, is an indispensable part of the AD infrastructure. The Domain Naming Master role must be operational in order to add or remove domains in the forest. If these roles fail, the same recovery measures should be applied as for a PDC Emulator breakdown.

4. New issues are arising with FPF’s network. The sites in the AD configuration are not fully routed, which means that connection objects are not configured for all DCs to ensure full transitivity of replication. This can result in a situation where the Knowledge Consistency Checker (KCC) is unable to create the map of necessary connections.

The KCC routine is invoked regularly to adjust the replication topology to changes that occur within Active Directory. First of all, the KCC must be enabled on all sites in the forest, and a fully routed or fully meshed environment must be configured. During its next pass, the KCC will generate a new replication topology with new AD connection objects, defining which DCs will replicate with one another.
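
What the KCC's inter-site topology generation boils down to can be sketched as a least-cost spanning-tree computation over site links: pick the cheapest set of connections that still reaches every site. The Kruskal-style function below is a simplified model of that idea; the site names and link costs are assumptions for illustration.

```python
# Simplified sketch of inter-site topology generation: given weighted site
# links, choose a least-cost spanning set so every site can replicate.
def spanning_connections(sites, links):
    """Kruskal-style selection; `links` are (cost, site_a, site_b) tuples."""
    parent = {s: s for s in sites}

    def find(s):                      # union-find with path halving
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    chosen = []
    for cost, a, b in sorted(links):  # cheapest links first
        ra, rb = find(a), find(b)
        if ra != rb:                  # link joins two disconnected fragments
            parent[ra] = rb
            chosen.append((a, b))
    return chosen

sites = ["NY", "HOU", "IND", "LA"]
links = [(10, "NY", "HOU"), (10, "NY", "IND"), (10, "NY", "LA"),
         (50, "HOU", "LA"), (50, "IND", "HOU")]
print(spanning_connections(sites, links))
# → [('NY', 'HOU'), ('NY', 'IND'), ('NY', 'LA')]
```

If some site has no link at all into this graph, no spanning set exists, which is the "not fully routed" failure mode the scenario describes.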

Optionally, bridgehead servers can be configured to facilitate the KCC’s operations. This must be done on at least one DC at each site, specifying the available links for inter-site data transfer. After the designation of bridgeheads, new site links will identify the connecting hosts for the KCC, ensuring an increased level of replication reliability throughout the forest. Additionally, remote sites could have their own independent links to the Internet; in that case, it might be practical to use VPNs in order to create as many links as necessary for a fully routed environment (Dean, 2010).

5. The first domain controller in FPF’s forest is, by default, a global catalog server. FPF’s new Network Administrator has recommended that an additional global catalog server be added to one of FPF’s remote sites, since the site seems to have an unreliable WAN link.

The Network Administrator’s recommendation has its merits. It is always a good idea to install an additional global catalog server, as a single one constitutes a single point of failure. In addition, as utilization of FPF’s network grows, having only one global catalog server can cause severe problems.

However, placing the additional global catalog server at that particular remote site will result in even more severe performance problems. The servers will generate very intense replication traffic, which will not pass reliably through the unreliable WAN link.

The global catalog server is responsible for very important functions inside AD, performing a number of tasks for third-party applications (such as Exchange) as well as for Windows itself. Thus, the placement of the global catalog is a crucial part of the AD contingency approach. It is recommended to place a global catalog server at each site that has a fast connection (no less than T1) to the main domain controller. It is also recommended, when keeping multiple global catalog servers at headquarters, to place the forest-wide FSMO roles on one DC and the domain-wide roles on another.

It is important to remember that global catalog services should not be enabled on the server that holds the Infrastructure Master FSMO role. Finally, it might be beneficial to upgrade the faulty WAN link anyway, as its poor quality affects all other aspects of AD usage at that remote site.
