Windows Server 2016 Administration: Modifications

Continuing with a previous post on the upcoming Windows Server 2016, we look at the administrative improvements in 2016. Many articles about Windows Server 2016 cover the new features we should expect, but this blog is about the modifications: changes made to the features we already utilize in the Windows operating system. There are many changes coming, and this article is not going to cover every one. It will focus on the significant changes that will affect a Windows administrator’s day-to-day usage of the operating system, our most common tasks.

  • The Interface – The GUI will be similar to the GUI in Windows 10. The ‘Start’ button is back, which should make a lot of administrators happy, since the lack of a Start Menu was one of the top complaints with Windows Server 2012. However, we will also see a change in how we find the items we utilize. Navigation of menus and features will have some differences; certain settings may not be where you expect to find them relative to Server 2012 and 2008.
  • Active Directory – Windows 2003 functional levels will be deprecated in this release. If your Active Directory is still at a Windows 2003 functional level, or you are still utilizing File Replication Service (FRS), it is time to enact a plan to raise the domain’s functional level and move on from FRS. Enhanced security features and certificate services will improve compliance.
  • PowerShell – Everything we do in the Windows 2016 GUI can be done in PowerShell, because everything done in the GUI is controlled through PowerShell. However, the reverse is not true: there are tasks you will need PowerShell commands to accomplish because there is no GUI for them. PowerShell 5.0 will expand the language, commands, and feature set to support the modified and new features in Server 2016. This article focuses on the administration side, but we have to note that there will be many changes on the developer side as well, such as the ability to define classes.
  • Windows PowerShell Console – For years now, we have been working with PowerShell, but the primary console we work in is rudimentary. Many of the features people have been looking for in a language editor are being incorporated into the updated PowerShell console, such as drag-and-drop, cut-and-paste, and more.
  • Storage – While there are new features for file servers and storage clusters, the most significant update to an existing feature affects data deduplication. Optimizations in the handling of large files and large volumes will give improved access and control. Clusters will be able to run in a mixed Server 2012 and Server 2016 mode, and Server Manager will be able to control deduplication of backup workloads.
  • Hyper-V – One of the big knocks on Hyper-V is that it is not as feature-rich as its competitors. Windows Server 2016 hopes to close that gap with features for handling server upgrades, modifying VM resources while the VM is active, device access, and more. The 2016 Hyper-V Manager is backward compatible, so you can manage 2012, 2008, and Windows 8 VMs. Hyper-V Manager also no longer has to use the credentials of the logged-in account; you can now connect with a different account. Improvements in the handling of server hardware resources give virtual machines improved performance, and even the upgrade process for a Hyper-V cluster has been improved.
  • Remote Desktop Services – The most significant modifications to RDS are the updated clients and browser support. For instance, Edge is fully supported and there will be new Windows 10 and Mac apps available. Device support has been enhanced to include Pen devices. Support for OpenGL applications is also included. New features will enhance the offerings we will be able to give our users like Personal Session desktops.
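The Active Directory and PowerShell points above go hand in hand: you can check (and eventually raise) your functional levels entirely from the console. A minimal sketch, assuming the ActiveDirectory module is available (e.g., via RSAT) and you have Domain Admin rights:

```powershell
# Sketch: check the current functional levels before planning an upgrade.
# Assumes the ActiveDirectory module is installed (e.g., via RSAT).
Import-Module ActiveDirectory

# A value such as Windows2003Domain means it is time to plan the move.
(Get-ADDomain).DomainMode
(Get-ADForest).ForestMode

# Once every domain controller is ready, raise the level.
# -WhatIf previews the change without committing it.
Set-ADDomainMode -Identity (Get-ADDomain) -DomainMode Windows2012R2Domain -WhatIf
```

Run with -WhatIf first to see what would change; drop it only after every domain controller in the domain has been upgraded.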

These are some of the major modifications in 2016 that will affect an administrator. There will be many more modifications in Windows 2016 than can be discussed here. Hopefully the few changes listed above will prompt administrators to take a look at what is coming and how it could affect their environment. While discussing the modifications to administration in 2016, it is hard not to mention new features; many new features are also going to affect your role as an administrator. To see more of what is new and changing in Windows Server 2016, check out the Microsoft blogs.

Feel free to post any questions or comments below or reach me directly by email.


Craig R. Kalty (CCIA, CCEE, CCA, MCITP:EA, MCITP:SA, VCP)| Sr. Network Consultant craig.kalty@customsystems.com

 

 

©2016 Custom Systems Corporation

Windows Server 2003 Migration: Tasks Part 3 – Build and Test

In Part 2, we created a plan that maps out the migration from Windows Server 2003. Now we are at the point where we need to build what we designed. Notice how in all the blogs concerning decommissioning 2003 I use the words ‘migrate’ and ‘migration’ and not ‘upgrade’? I probably should have discussed this sooner, but there is no upgrade. You cannot upgrade 32-bit Windows 2003 to 64-bit 2008 R2 or 2012 R2. No matter your plan and budget, you will need to perform a fresh install on at least one server to start the process. Also, it would be wisest to go to 2012 R2, for many reasons (particularly not having to repeat this process when 2008 reaches end-of-life). For some migration paths, you may need to install at least one 2008 server to go from 2003 to 2008 and then to 2012.

The best place to start would be a test/development environment. We know from experience that there are many smaller shops out there that do not have the budget to create a development environment. Most of them are going to rely on the expertise of their staff or outside services to get their environment from where it is now directly to an updated infrastructure without performing a lot of tests. For those environments, remember to at least do extensive planning and research beforehand to mitigate issues.

For those that can build a development environment, the best way to do it is virtualization (there I go again using that word). Remember that you can make a virtual server host out of various hardware platforms; you can even install a robust hypervisor for free. To give you an example, my laptop has an extra drive that I swap in place of the DVD drive. I boot to that extra hard drive, where I have XenServer hosting over a dozen VMs. Is it powerful? Not really, but I can run my demo environment from it. The point is we don’t need to break the budget to make a development environment; we may not even need to touch the budget at all. If you did budget for a new virtual environment or to extend an existing one, here is where you can start utilizing that new investment.

Create P2V (physical-to-virtual) images of your existing infrastructure servers. From there, you can fire up new virtual machines (VMs) housing 2012 R2 and/or 2008 R2. Once you have the test environment, take snapshots of all the VMs before making any changes. Now you can begin the process of converting your virtual infrastructure in a development environment. If you run into issues, you can use the snapshots to reset the environment and try again. Take detailed notes of all the steps and pay attention to any potential problems. Once you have a clear plan with detailed notes, you are less likely to run into the unexpected when updating your production environment.
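If your development lab runs on Hyper-V rather than XenServer, the snapshot-and-reset cycle described above can be scripted. A hedged sketch, assuming the Hyper-V PowerShell module and a lab host where every VM belongs to the test environment:

```powershell
# Sketch: checkpoint every lab VM before a test run, assuming a Hyper-V
# host dedicated to the development environment.
Import-Module Hyper-V

Get-VM | Checkpoint-VM -SnapshotName 'Pre-Migration-Baseline'

# If a test run goes wrong, roll the whole lab back and try again.
Get-VM | ForEach-Object {
    Restore-VMSnapshot -VMName $_.Name -Name 'Pre-Migration-Baseline' -Confirm:$false
}
```

The snapshot name here is a placeholder; on a shared host, filter Get-VM to just your lab VMs before piping.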

So, what exactly are we testing in our development environment? There are basic services that almost every shop is going to be utilizing. Active Directory, DNS, and DHCP are the three most common services we will need to migrate to another server. The good news is that detailed directions from Microsoft and other experts can easily be found on the web. Some organizations are going to have the basics and some are going to have more services in use. For instance, some organizations may utilize Terminal Services. Migrating that to Remote Desktop Services (RDS) will be a project in itself (though a worthwhile one).

Here is an example list of services you may/will need to test:

  • Basic services:
    • Active Directory (AD)
    • Group Policy
    • Domain Name System (DNS)
    • Dynamic Host Configuration Protocol (DHCP)
  • Extended services:
    • Certificate Services and Public Key Infrastructure (PKI)
    • Terminal Services
    • Distributed File Services (DFS)
    • Internet Information Services (IIS)
    • Network Load Balancing (NLB)
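As one concrete example of migrating a basic service from the list above, DHCP can be moved by exporting the database on the old server and importing it on the new one. A sketch of the commonly documented netsh path, with the file path as a placeholder:

```powershell
# On the Windows Server 2003 DHCP server (file path is a placeholder):
netsh dhcp server export C:\dhcp.txt all

# Copy dhcp.txt to the new server, then on the 2012 R2 replacement:
netsh dhcp server import C:\dhcp.txt all

# Verify that the scopes, reservations, and options arrived intact.
Get-DhcpServerv4Scope
```

Remember to authorize the new DHCP server in Active Directory and deauthorize the old one before clients renew their leases.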

Each organization is different, so they may have some or all of the items from the above list, and a lot of organizations will have more to add. Aside from these services that come in a Windows server, we will need to test hosted applications. This set of blogs has been pretty much focused on the Active Directory side of the migration, but what about applications? If you have Exchange, SQL, or another enterprise application hosted on a 2003 server, you are going to need a separate project just to migrate those applications. This may be the opportunity to move from in-house mail services to a cloud-hosted solution like Office 365. It is possible to focus on upgrading our Active Directory infrastructure first and save the applications hosted on 2003 servers for a later project. However, research the applications to make sure they will still function in an updated AD infrastructure. If not, that is one of those symmetrical projects you will need to have in your plan.

The next step will be implementation into production. At this point, we are ready. We have performed tests in our development environment, gained experience in the tasks, created detailed instruction sets, and realized modifications needed in our plan.

As always, I welcome your comments or questions. Please feel free to leave them below or email me directly. Also, be sure to bookmark our site for more information from Microsoft, and please register for our live Microsoft event – Windows Server 2003: Security Risk and Remediation – on February 18.


Craig R. Kalty (CCIA, CCEE, CCA, MCITP:EA, MCITP:SA, VCP)|
Sr. Network Consultant
craig.kalty@customsystems.com

 

 

 

©2015 Custom Systems Corporation

Windows Server 2003 Migration: Tasks Part 2 – Planning

In Part 1, we discussed taking inventory. Now that we have our inventory, it is time to plan our Windows Server 2003 migration. The planning phase is critical: this is where you will make major decisions that affect your infrastructure going forward. We could do a straightforward migration going server to server, or we could use this opportunity to make needed or desired changes to the infrastructure. Moving from Windows Server 2003 will make new technologies and advancements available. You do not have to create a plan from scratch; there are a number of tools we can use. For instance, Microsoft has the Assessment and Planning Toolkit, and the web is full of helpful tools and resources. Open your favorite browser and search engine to find what you need (say, a blog like this).

For our planning phase, we need to factor in the following:

  • Inventory – We took an inventory of our infrastructure in Part 1. We now need to assess that inventory.
  • Budget – How much we can spend determines what we can do to perform the migration. If we can afford to put up a new virtual environment or build on an existing virtual environment, then we virtualize. If we can only afford to utilize the hardware we have, then this is a rolling migration.
  • Manpower – The experienced resources available to work on this project and how much of their time they can allocate/dedicate.
  • Timeline – Given how long we have to complete this task, we can determine what we can accomplish.
  • Infrastructure and environment – The condition of our infrastructure and the environmental resources may require modifications to accomplish this project. If you are adding new servers, you may need new racks, more space, more cooling, and more power.
  • Symmetrical Projects – Those projects we can get done in conjunction with this project (i.e. virtualization) or those projects going on that could be impacted by our migration project.

Given the factors above, here are the general tasks in creating your plan:

  • Determine the project manager. Make sure you have someone who is dedicated to seeing this project through and can coordinate the plan.
  • Put the inventory into categories that make sense to your team and then prioritize the categories. The inventory facilitates our assessment of what we have.
  • Given the assessment and factors listed above, we can determine our needs, tasks, and what we can feasibly accomplish.
  • Determine the goals:
    • Are we migrating everything?
    • What are we migrating within our deadline? What can be migrated at a later time?
    • Are we trying to migrate and utilize the existing hardware or are we moving to new hardware?
    • Where will the supported applications reside?
      • New physical server?
      • Virtualization?
      • Moving to cloud services?
  • Determine your timing – Take your priorities and determine what can be accomplished within the allotted time frame and what needs to be done at a later time.
  • Map out the target destinations for the items you are migrating (e.g., virtualize, move to the cloud, upgrade, decommission, etc.)
  • Given the priority and desired outcome, determine how much time each task needs. Where possible, leave time for the unexpected, and try not to make your timeframes too tight. (Don’t read this out loud: If you want, use the Engineer Scott method. Give a time frame you know you can beat so you look good.)
  • Map your resources to your task. Determine the best available candidate(s) for each task.
  • Develop your timeline. Take the information and priorities you have gathered and map the tasks with a start date and time, duration, and an expected end date.
    • Include allowances for acquiring new resources. For instance, that SAN you ordered for the virtual environment may take three weeks to deliver. Use other tasks to fill the gaps where you are on hold. Even if they are a lower priority.
    • Include research time for each task. It does not hurt to admit that there are tasks your people are not sure how to accomplish. I would not want someone to try and migrate my DHCP from server 2003 to server 2012 without having experience or at least proper instructions. Within the planning documentation, note where to find the instruction sets.
    • Don’t oversaturate a resource. We all have environments where certain people will wind up being the subject matter expert for multiple tasks.
  • Document everything.

I know that I have greatly simplified the information listed above. The goals of the blogs in this series are to get you thinking, give you a general outline, and help keep you moving. I cannot stress enough that you need to be very detailed with your resulting planning. Your plan is what will be visible to your team and your management. You will most likely be held to the plan you publish. Proper planning will take out guesswork, cut down on surprises, help you handle the unexpected, and keep things running smoother.

In the next blog for this topic, we will look into our next step: ‘Build and Test.’

As always, I welcome your comments or questions. Please feel free to leave them below or email me directly. Also, be sure to bookmark our site for more information from Microsoft.


Craig R. Kalty (CCIA, CCEE, CCA, MCITP:EA, MCITP:SA, VCP)|
Sr. Network Consultant
craig.kalty@customsystems.com

 

 

 

©2015 Custom Systems Corporation

Windows Server 2003 Migration: Tasks Part 1 – Inventory

Know your environment. The very first task in a Windows Server 2003 migration is to update the inventory of your infrastructure. This does not mean only your Windows Server 2003 machines; it means your entire infrastructure. Why? Because you need to know exactly what you have, whether there are any pitfalls, and whether there are any synergies you can take advantage of. Just because a resource is not a Windows Server 2003 machine does not mean it is exempt from the effects of the migration. In fact, you may need to update other resources in order to function with the results of the migration. You need to account for the following:

  • The quantity of Windows Server 2003 machines you have and their functions. How many are domain controllers? How many are just member servers?
  • The resources that are not Windows Server 2003.
  • Of the documented resources, the quantity of them still in use. You would be surprised how many organizations have orphaned servers and resources still in their environment because no one knew it was safe to remove them.
  • The hardware those resources reside on. Is the hardware still viable for today’s workloads? Is the hardware worth supporting?
  • The software/applications residing on resources. We need to know who owns it, whether it is still used, the resources required to install and operate it, and whether it can be migrated.
  • The business units who use the resources. Talk to the people to find out if they actually still need the resources. Find out if they have any projects or plans to upgrade their applications that will facilitate the migration from Windows Server 2003.
  • The other resources or clients that need to communicate with the Windows Server 2003. For example, do you have a database or share on Windows Server 2003 that other servers are accessing?
  • The servers housing applications that can’t be migrated. Legacy software is one of the primary reasons we still have older servers with older operating systems. The software is still in use or is legally needed for archival purposes. There may be no upgrade path for the legacy system.
  • The people resources available. You will need to know if you have the staff with the needed experience and knowledge, the subject matter experts on the software applications, and the manpower-time needed for the project.

I won’t go into detail here on how to perform your inventory of your infrastructure. Various third-party vendors have products (inventory management systems) to help you. There are also tools on the Internet available to help with the task. Microsoft provides the Assessment and Planning Toolkit.
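If your domain includes a machine that supports the Active Directory PowerShell module, a quick first pass at the inventory can come straight from AD. A sketch, with the output path as a placeholder; note that Get-ADComputer requires Active Directory Web Services, which Windows Server 2003 DCs only provide via the separately installed Management Gateway Service:

```powershell
# Sketch: pull every machine still reporting a 2003 OS out of AD.
Import-Module ActiveDirectory

Get-ADComputer -Filter 'OperatingSystem -like "*Server 2003*"' `
    -Properties OperatingSystem, LastLogonDate |
    Select-Object Name, OperatingSystem, LastLogonDate |
    Sort-Object LastLogonDate |
    Export-Csv C:\Inventory\server2003.csv -NoTypeInformation
```

LastLogonDate helps flag those orphaned servers mentioned above: a machine that has not logged on in months is a candidate for decommissioning rather than migration.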

Once you have your inventory, you can start working on your plan. With the inventory and knowledge of the resources, you have the basis needed to determine priorities, tasks, resource assignment, scheduling and more. Now, we can move onto the planning: See our blog “Windows Server 2003 Migration: Tasks Part 2 – Planning” (available soon).

As always, I welcome your comments or questions. Please feel free to leave them below or email me directly.


Craig R. Kalty (CCIA, CCEE, CCA, MCITP:EA, MCITP:SA, VCP)|
Sr. Network Consultant
craig.kalty@customsystems.com

 

 

 

©2015 Custom Systems Corporation

Access Control and Authorization with Windows Server 2012

Have you ever needed to set up permissions on a network resource and the only way to satisfy the conditions for permission was to create a brand new security group?  Windows Server 2012 is the answer.

Let’s say you have a file share (network resource) that should only be accessed by people who are both managers and members of the HR group.  You have a Managers group and an HR group, but the requirements specify a mix of the two groups.  I have run across situations similar to this many times and I am betting many other domain administrators have run across this as well.

Prior to Windows 2012, we might need to create another group that contains users who satisfy both conditions.  This generates the need to administer another group.  Too many situations like this, and you have a huge list of groups to cover every condition.  The more groups you have, the more you need to manually control group membership.  To get around this situation, we may manually set unique permissions directly on the network resource.  So now the administrator must update individual access directly on the resource instead of in a group membership.  Either way, we now have another individual point to administer and document.  The larger the organization, the more complicated this gets.

Best practice is to use groups for access control as opposed to using individual accounts.  This makes things easier because all you need to do to give someone access permissions is add them to a group.  However, what happens to administration when we create a group to cover many, many situations of multiple conditions?  Server 2012 has a new feature that alleviates this situation and empowers the administrator.  Dynamic Access Control is part of the advanced authorization and access control technologies.  Dynamic Access Control includes the following new functionalities:

  • Central Access Rules – the expression of authorization that includes one or more conditions.
  • Central Access Policies – used to bring together multiple rules of authorization to be applied across servers in a domain.
  • Claims – a unique identifier for user, device, and resource objects in a domain.  This identifier can be included in expressions.
  • Expressions – joins multiple conditions of authorization together to define access permissions.
  • Proposed Permissions – allows an administrator to predict the results of their conditional access expressions without actually applying the change.

Given the example above of HR Managers, we could go about setting up access permissions to the network share in a few new ways.  We could do it directly on the network share where we would create an expression that has the conditions of being a member of both the Managers group and the HR group.  Or, we could do it on the domain where we would create a central access rule that contains the defined conditions for group membership.  We would then include the rule in a central access policy that we could apply across multiple servers in our domain.  To test this, we could use proposed permissions to see how this new policy affects our resources without actually applying the change.  We could take this one step further by using claims.  We could create a claim on individual user accounts that gives them a unique identifier.  We could then use the unique identifier to make an expression that specifies the user is a member of the HR security group and has the associated claim to determine access permissions.  Think about how many groups and cases of unique permission administration we could eliminate.
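To make the HR Managers example concrete, the ActiveDirectory module ships cmdlets for central access rules and policies. The sketch below is illustrative only: the group names come from the example above, and the conditional ACL string is a simplified stand-in for real SDDL, which requires literal SIDs rather than group names:

```powershell
# Illustrative sketch only -- real SDDL conditions use literal SIDs.
Import-Module ActiveDirectory

# A conditional ACL requiring membership in BOTH example groups.
$acl = 'O:SYG:SYD:AR(XA;;FA;;;AU;' +
       '(Member_of {SID(Managers)} && Member_of {SID(HR)}))'

New-ADCentralAccessRule -Name 'HR Managers Only' -CurrentAcl $acl
New-ADCentralAccessPolicy -Name 'HR Share Policy'
Add-ADCentralAccessPolicyMember -Identity 'HR Share Policy' `
    -Members 'HR Managers Only'
```

Once the policy exists in AD, it is pushed to file servers through Group Policy and applied to individual shares with the Enhanced ACL Editor described below.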

In order to support Dynamic Access Control, a new Access Control List (ACL) editor has been included in Windows 2012.  The Enhanced ACL Editor allows you to incorporate the expressions created with the access control/permissions of the network resource.  This is the tool that allows you to create and bring together all the topics presented above.

Put a 2012 domain controller in a test environment and kick the tires of this concept.  Afraid of what you might break?  That’s what we’re here for. Call or click today for your free network assessment.


Craig R. Kalty (CCIA, CCEE, CCA, MCITP:EA, MCITP:SA, VCP)
Sr. Network Consultant
Craig.Kalty@CustomSystemsCorp.com
© 2014 Custom Systems Corporation

Time for Windows Server 2003 End-Of-Life Plan

In previous posts, we’ve described the necessity to upgrade your Windows XP PCs to either Windows 7 or Windows 8.  Today, we are going to discuss the server side of the house.

Microsoft will stop supporting Server 2003 R2 on July 14, 2015.  I know a year can sound far away and over the horizon, but it isn’t – especially when it comes to servers.  A migration from one server to another can take anywhere from a few days to several weeks, depending on your infrastructure.  For example, migrating a file server from 2003 to 2008 is fairly straightforward – especially with the help of backup/restore software like Backup Exec.  Backup Exec remembers things like file permissions, so we can back up your data from your old server and then restore it to the new server.

If you have shared printers on your network, this part of the migration can be a bit more involved.  Not every printer manufacturer will support installing their printers in a 2008 64-bit environment – but we would investigate this for you before we begin the migration.  If your printers are not supported on a 2008 server, it may be time to upgrade those as well.

Support for Exchange 2003 server ended back in January 2008.  Exchange 2007 ‘mainstream support’ ended  April 2012, with extended support ending April 2017.  If you are still using Exchange 2003 or 2007, you should move to a new server immediately.  Custom Systems has done several migrations from 2003 to 2007, and up to Exchange 2010, so we have a clear path to follow.  We have also migrated a few clients from an on-site Exchange Server to Office 365 hosted email, depending on client need.

If you are using your servers to host applications, like Quickbooks or other third-party vendors, a migration from your old server to a new server gets more complicated.  We may need to get the software vendor involved in the process.  Make sure you have access to the latest version of your Applications before trying to move to a new server.  In some cases, we may even need to migrate to a new software product if the older product is no longer supported.

As always, we would be happy to provide you with a free network assessment. Call or click today!

 

Chase Reitter
Network Consultant
Chase.Reitter@CustomSystemsCorp.com

 

Things To Do With An Old Server

This Old Server

Today, we’re going to discuss things you can do with your old server hardware.  With everything going virtual or hosted now, sometimes you’re left with an old server that you don’t know what to do with.  Besides the obvious (boat anchor, paper weight, etc.), we can still put that ol’ reliable server to good use.  Let’s assume that the warranty on your old server has expired, and you have already moved all of your production services to either new supported hardware or to a hosted service like Office 365.  As an example, our in-house Exchange email server was migrated to Office 365 several months ago, and a SQL service we were providing has also been moved off-site.  That leaves us with two perfectly good (although old and no longer covered by the manufacturer’s warranty) servers.  One of these servers has plenty of disk space, but not a lot of memory.  The other has lots of memory, but not a lot of disk space.  This gave me an idea: use an iSCSI connection between the two servers, and set up a development environment.

Making the Old New Again

Internet Small Computer System Interface (iSCSI) has been around and in use for about a decade, but vast improvements have been made with Windows Server 2012.  Previously, you had to use either Microsoft’s iSCSI add-ons or third-party tools, and both were more difficult to manage.  Now you can use the iSCSI tools right from the Windows Server 2012 management console.  And not only are design and setup easier; with higher-performance network equipment, iSCSI connections are more reliable and much faster than they used to be.  You don’t need to go out and buy fiber optic cards.  Gigabit Ethernet cards can be found in just about any server built in the last five years.  While I’d like to go out and buy fiber optic cards, this is only for development purposes, and I set a goal at the beginning of this experiment to only use equipment I already had.  Both of my test servers have dual gigabit cards (two ports each), which will be plenty fast enough.

We have ways of Making You Talk

There are two simple ways to set up your physical iSCSI connection: use a switch that supports VLANs, or just use an 8-wire cross-over cable.  Many Cisco routers include a cross-over cable, so I have a few.  Just make sure that they are 8-wire – many cross-over cables only have 4 wires to simply cross the transmit and receive signals – but those can only handle 100 Mb, and we’re going for the full gigabit here.

After installing Windows 2012 on both servers, I assign a static IP address to each server’s primary NIC on my primary subnet (192.168.1.x).  This is for server management purposes, and to connect to the rest of my network.  Then I assign the secondary NICs static IPs that do NOT reside on my primary network, for example 10.0.0.x.  This keeps the iSCSI traffic off of my primary network equipment, and makes the traffic between the two iSCSI servers MUCH faster.

Next, I use the Windows 2012 Server tools to set up my primary iSCSI management server (DEVHOST1) and my secondary iSCSI storage server (STORAGE1).  From the Windows Server 2012 management tools, we assign all of the available disk space on STORAGE1 as a LUN to store our virtual hard drives, which will be managed by the DEVHOST1 server.
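The same target/initiator setup can also be scripted with the iSCSI cmdlets that ship with Server 2012. A sketch, with the server names from this post and the sizes, paths, and addresses as placeholders (the .vhdx format assumes 2012 R2):

```powershell
# On STORAGE1 (requires the iSCSI Target Server role):
New-IscsiVirtualDisk -Path C:\iSCSI\LUN1.vhdx -SizeBytes 500GB
New-IscsiServerTarget -TargetName 'DevTarget' `
    -InitiatorIds 'IPAddress:10.0.0.1'      # DEVHOST1's dedicated iSCSI NIC
Add-IscsiVirtualDiskTargetMapping -TargetName 'DevTarget' -Path C:\iSCSI\LUN1.vhdx

# On DEVHOST1, using the built-in iSCSI initiator:
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.2
Get-IscsiTarget | Connect-IscsiTarget
```

After connecting, the new disk appears in Disk Management on DEVHOST1 and can be brought online, initialized, and formatted like any local volume.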


By keeping the iSCSI network traffic on its own subnet, either on a separate switch or by using a cross-over cable, we improve the performance of both.

I can now install Microsoft Hyper-V on the DEVHOST1 server.  I can then build Virtual Servers with their large files located on the STORAGE1/LUN1 server.

This setup was for a Development environment.  I will be using it to test an Exchange 2013 server and a test SQL 2014 server.  In a production environment, I would be using new and supported hardware.

Custom Systems offers a wide range of new and supported solutions for your production storage and network performance needs.  To find out more, contact us today!

 


 

 

Chase Reitter
Network Consultant
Chase.Reitter@CustomSystemsCorp.com

 

 

 

© Copyright 2014 Custom Systems Corporation

Ready For a Storage Area Network?

Servers and data storage are a mundane topic for most executives, especially when their primary focus should be on running a profitable business.  Storage Area Networks (SANs) have declined rapidly in price and are no longer a technology for only the largest corporations.  Today’s small and medium businesses (SMBs) can leverage this technology to create a more flexible computing environment and reduce server costs.

If you are still buying servers with a single purpose, such as SQL, Exchange, or SharePoint, you are wasting money on hard drives and RAID arrays that can easily double or triple your server costs.  In comparison, an SMB SAN allows you to logically group the hard drives for all your servers into one or two devices that can be connected to all of your servers, providing higher disk performance, higher availability, and faster recovery time in the event of a catastrophic server failure.

By separating the storage from the traditional server (CPU, memory, and network adapters) you increase storage efficiency by only allocating the amount of storage currently required for the server and gain the ability to add storage on the fly.  For instance, if your SQL server demands more disk space – click to allocate it. Or, if your Exchange server has recently archived much of your old email to another system – reduce the amount of disk available to Exchange and increase its performance.

Many departments such as engineering, can go through periods of large storage growth when a new project or New Year approaches.  A SAN allows you to add additional drives on the fly and then allocate them to any server that requires it.  No more surprise IT requests to get a larger server because the current server is maxed out.

Server failures stranded on local RAID arrays are a thing of the past. When a SAN-connected server fails, simply attach its storage on the SAN to a new or backup server and bring your business back online quickly. No more waiting for tape libraries or cloud-based services to restore all the data to a new server, which can mean two to three days of downtime.

What does it cost? That depends on your storage requirements and the number of servers you would like to connect. A small SAN serving two to three servers starts at about $10K; your mileage will vary with the size and number of hard drives you insert into the SAN. Since all SANs are scalable, you can start with as few as six hard drives and grow to over 200 as your demands increase. Why not start today? Reduce your server costs, increase your flexibility, and get back to focusing on what is really important: growing YOUR business.

Paul R. Cook
Vice President, Network Services
Paul.Cook@CustomSystemsCorp.com

 

 

 


Cloud-Based Apps vs Local Servers

I get a lot of questions about Cloud computing.  So today we are going to discuss a few of the differences between keeping your applications and files on local servers vs. moving to the Cloud.  We will cover some of the advantages and disadvantages of both, as well as examine my own bias.  We may even discover that I’m (GASP) wrong.  Sound like fun?  Ready?  Here we go!

What is Cloud?

Well, it’s not in the Stratosphere (though THAT would be especially cool!).  Cloud computing usually refers to a paid service that stores your data for you.  Everything from email, databases, and files to accounting software can be Cloud-based.  The advantage?  No servers to manage or maintain.  No backups to check, no tapes to change.  Just sign the check on time, and it’s all taken care of for you.

This ain’t your Dad’s Cadillac, er, Cloud.

Cloud computing has been around since the dawn of the interwebs.  Why it’s only now becoming a buzzword is beyond me, but there it is.  Chances are, your bank hasn’t stored your account information in their local branch office in over a decade.  Instead, they pay a hosted service to provide the disk space and backups they need.  Banks used to dial into the data center at a specific interval each day, update any changes, and check for problems.  It was painfully slow, but it kept your information safe.  Fast forward to today: even your grandmother is uploading pictures to Facebook or DropBox.  Both are the Cloud.

So is Cloud better?

Well, it depends.  Internet services keep getting faster and more reliable.  So does server hardware.  Having servers in my office means that I get to manage them.  If there is ever a problem, it’s a short walk down the hallway, and I can troubleshoot in a matter of minutes.  Hardware can easily be replaced or upgraded as needed.  Servers have lights that blink, fans that whir, and hard drives that hum in perfect harmony.   And should one of them get out of tune, I can fix it.  If my data is in the Cloud, I have to rely on someone else to keep an eye on their servers.

In some scenarios, I suggest a hybrid of on-site servers and a Cloud-based solution.  For a medium-size business, this is often the best of both worlds.  For example, keep your data on an in-house file server so you have local, secure access to your information, but use a hosted solution for email.  Email servers take a lot of work and are difficult to manage.  While I’m more than happy to take care of your email server, a hosted email option may be the most cost-effective choice for your organization.

Give us a call today, and we can help find the best solution for your business!

Full disclosure:  Custom Systems uses Office365 to host our email and file services.  This article was written on my laptop, but then stored on a hosted SharePoint server for the editor to review and fix my spelling and grammar.

Chase Reitter
Network Consultant
Custom Systems Corporation
Chase.Reitter@CustomSystemsCorp.com

 

 
