It is common for IT roles to face criticism within organisations. When everything is working as expected, the perception can be that IT teams are resource-heavy, demanding and consuming large budgets. But how many of us are actually exposed to what happens “behind the scenes” when delivering enterprise IT?
It’s easy to underestimate the importance of the technology we use; take a simple application like e-mail, for example. With the average employee receiving 126 e-mails a day, it is one of the most commonly used communication tools, which makes it essential that it is well maintained. Keeping companies running “Business as Usual” requires planning, resources and constant innovation. Without this, organisations can very quickly become outdated and “left behind”.
Below we explore some common and complex elements of system design that end-users often take for granted:
Successfully implementing new technology projects (often designed with the primary aim of delivering business impact and benefit) requires proper planning and co-ordination with stakeholders across an organisation. It is common for projects and updates to get 90% of the way, after countless hours and dedicated resource, only to be U-turned and never deployed. Involving the right people and noting every touch-point along the project journey is therefore key, as there are often elements that affect other systems or workflows.
Protecting the technology that powers the workforce is one of the most hidden but complex tasks IT teams face. It is very time-consuming and, ideally, something end-users never have to get involved in. Hackers attack on average over 2,200 times a day; this, combined with a 67% rise in security breaches since 2014, makes security a big job for IT teams. Sadly, in many cases it takes a security breach and direct business impact to make companies realise just how important it is.
Security is also often met with resistance by users, who may feel that strict measures are a hindrance to their role or to what they are trying to achieve. They can become frustrated by what feel like road-blocks when completing urgent tasks: “Why has my e-mail been held for review?” or “I need to install this application urgently to work on something.” This is common dialogue we come across regarding even the simplest security layers. What businesses need to understand is that security should be acknowledged and adopted by users as protection and prevention, rather than as a productivity block.
Designing architecture with high availability is essential when working with critical systems and applications. Issues can occur at multiple points within the system architecture, and can also be due to external factors.
Let’s look at e-mail again as an example: a business is running an e-mail server from DC1 (Data Centre 1). If the DC loses connectivity or power, the e-mail server becomes unreachable and the service stops running – e-mail has nowhere to route. The internal IT team has no ability to resolve the issue, as it lies with the Data Centre provider, so the system is down until the problem is resolved. High availability is needed here to minimise downtime – in this scenario the IT team can make e-mail “highly available” by implementing a fail-over, or secondary, e-mail server at another data centre location, DC2. The secondary mail server routes all the traffic, end-users are unaware of any impact to their operation, and the incident is resolved. We use this scenario to explain the concept of high availability, although technically it can be implemented at many levels within the environment.
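At the application level, the fail-over idea above can be sketched as a simple health-check that prefers the DC1 server and routes to DC2 when DC1 is unreachable. This is a minimal illustration of the concept only, not a production design; the hostnames, port and `choose_mail_server` helper are hypothetical placeholders.

```python
# Illustrative sketch of primary/secondary mail-server fail-over.
# The hostnames below are made-up placeholders, not real endpoints.
import socket

PRIMARY = "mail-dc1.example.com"    # e-mail server in DC1
SECONDARY = "mail-dc2.example.com"  # fail-over server in DC2
SMTP_PORT = 25

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_mail_server() -> str:
    """Prefer DC1; fall back to DC2 if the primary is unreachable."""
    if is_reachable(PRIMARY, SMTP_PORT):
        return PRIMARY
    return SECONDARY  # DC1 down: traffic routes via DC2 instead
```

In a real estate this decision is usually made by DNS, load balancers or the mail platform itself rather than bespoke code, but the logic is the same: detect the failure, route around it, and the end-user never notices.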
Amidst the increasing adoption of cloud-based solutions, technology (for now at least) still mostly comprises physical infrastructure, which can be broadly split into the categories below:
- Endpoints – desktops / laptops / thin clients
- Networking – switches / firewalls / routers
- Servers & storage
For most end-users, the infrastructure begins and ends with the desktop – the actual devices behind a corporate network are, rightfully, never seen. And at a high level, it doesn’t take much physical “kit” at all to create an enterprise or small business network: spin up some servers, create a network, patch your devices and you’re good to go. However, as an organisation grows and adds applications and technology to its “operational fleet”, tracking and managing the constant expansion of the hardware estate creates a lot of work. Like any hardware, all devices have a usable life-span, and even during that life they may require part replacements or repairs from time to time. In a large IT estate, keeping assets operating as expected can become quite a task. Aircraft spend around 10% of their life undergoing maintenance and repairs – something that directly affects you as the passenger. The same is true of technology infrastructure: to keep it performing at optimal output, the hardware needs continual maintenance. Something as simple as a single network switch failure could have disastrous impacts, especially if that switch powers critical equipment or devices with no secondary network connection.
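Tracking life-spans across an estate is often just disciplined record-keeping. The sketch below shows the idea in its simplest form; the asset names, deployment dates and five-year refresh cycle are illustrative assumptions, not figures from any real estate.

```python
# Minimal sketch of flagging hardware past its assumed usable life-span.
# Inventory records and the five-year cycle are made-up illustrations.
from datetime import date

USABLE_LIFESPAN_YEARS = 5  # assumed refresh cycle

inventory = [
    {"asset": "switch-core-01", "deployed": date(2017, 3, 1)},
    {"asset": "laptop-0412",    "deployed": date(2021, 6, 15)},
]

def needs_replacement(deployed: date, today: date) -> bool:
    """True once an asset has exceeded the assumed usable life-span."""
    return (today - deployed).days > USABLE_LIFESPAN_YEARS * 365

def refresh_list(today: date) -> list[str]:
    """Assets due for replacement as of the given date."""
    return [a["asset"] for a in inventory
            if needs_replacement(a["deployed"], today)]
```

Real asset-management tooling adds warranty terms, repair history and vendor end-of-life dates, but the principle is the same: know what you own, and know when it will stop being dependable.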
In summary, there’s a lot that IT teams do – often behind the scenes – to maintain BAU for their workforce (yes, there’s more to them than simply rebooting devices!). It’s common for people to question the level of investment in an organisation’s IT infrastructure, as well as why continual upgrades and developments are required. The answer (a theme that runs throughout this post) is to maintain up-time for business-critical services, reduce risk and ensure the business runs smoothly.