How to become a tough guy (Systems Administrator)

By – Noah Cain

 

 

500!

 

500 broken computers!

 

That’s the number I figured when I was a tech noob!

 

500 computers repaired to working perfection before you could consider yourself a tough guy systems admin!

 

You need that many for experience! To develop leather fingertips! To learn all the motherboards and the different types of processors. The RAID configurations and how to manipulate a Dell PowerEdge 1950 into letting you use only one disk.

 

So…I got started!

 

Of course, along the way, you stop thinking about how to be smart and all that, about learning all the different types of laptops and desktops, tablets and smartphones…it stops being the point!

 

You get past the silliness of it all…past the ever-changing names and hardware of technology.

 

But then, AFTER, you realize that is what you are! You are a systems administrator. You can fsck a machine…even a FreeBSD machine. You can get cPanel to work on a Debian machine; not that you would want to, but you could do it. You start to see things in a different light. You start to see computer issues as much more than just common bugs. You start to see them as imperfections in a perfect environment. You start to hate them. They glare at you and make the people needing the help think of you as the problem.


I’ll tell you, you learn a lot of things on the way to 500, but none more important than this: your systems administrator is only here to help you succeed. You should remember that they are human; they have bled for the knowledge that everyone else seems to think is easy to learn and master. They have forgotten more about technology than you will most likely learn in your life. Most of them can’t change the oil in their cars, but most of them can quote every movie you have ever seen…especially ones with awesome quotes like this one from “Knockaround Guys”.

IT Continuity Planning

By: Sofiane Chafai

Today most organizations have committed resources; developed policies, procedures, and tools; and set up their organization and IT infrastructure to maintain their critical business processes (Business Continuity Plan) and recover to normal activity (Disaster Recovery Plan) as quickly as possible during unforeseen circumstances and major outages.

Having a plan for these situations is not straightforward; the planning tasks are challenging and require several kinds of expertise and considerable effort.

In summary, the following details should be included in the IT continuity plan:

  • IT and business core process list
  • Gap analysis exercise outcome, which includes the Recovery Time Objective and Recovery Point Objective for each process and component
  • IT architecture
  • Roles and responsibilities during contingencies and recovery
    • IT continuity procedures
    • IT recovery procedures
  • Invocation procedures (call tree; see the sketch after this list)
  • Damage assessment
  • Test plan
  • Contact details (staff, vendors, stakeholders, rescue services, hospital, etc.)
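
To make the invocation procedures item concrete, here is a minimal call-tree sketch in Python. Every role and phone number below is a hypothetical placeholder, and a real plan would trigger an automated notification service rather than print statements:

    # Minimal call-tree sketch: each role notifies its contacts in order.
    # All roles and numbers are hypothetical placeholders.
    CALL_TREE = {
        "crisis_manager": ["it_lead", "facilities_lead"],
        "it_lead": ["sysadmin_on_call", "network_on_call"],
        "facilities_lead": ["security_desk"],
    }

    CONTACTS = {
        "crisis_manager": "+1-555-0100", "it_lead": "+1-555-0101",
        "facilities_lead": "+1-555-0102", "sysadmin_on_call": "+1-555-0103",
        "network_on_call": "+1-555-0104", "security_desk": "+1-555-0105",
    }

    def invoke(role, message):
        """Walk the call tree depth-first; each role calls its contacts."""
        for contact in CALL_TREE.get(role, []):
            print(f"{role} calls {contact} at {CONTACTS[contact]}: {message}")
            invoke(contact, message)

    invoke("crisis_manager", "Plan activated: report to the alternate site.")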

The IT continuity plan includes four stages:

  • Initial response
  • Relocation
  • Recovery
  • Restoration

Initial response includes the first processes: notification and plan activation.

Relocation mainly covers staff relocation schedules, logistics, and transportation to the alternate site, and activation of the alternate site (IT equipment, telecoms, servers, etc.).

Recovery includes the damage assessment of primary facilities and the initiation and completion of recovery tasks.

Restoration requires verifying and confirming primary facility and infrastructure readiness, scheduling staff relocation from the alternate site back to the primary site, restoring business files, consolidating and archiving incident documentation, and returning to business as usual.

In practice, how to build your plan (dos and don’ts)

You need to have a valid business case. Management commitment is probably the first and most important requirement to succeed and have a sustainable IT continuity plan.

Today most organizations have developed business continuity planning and set up their IT infrastructure, processes, and business model to reduce the impact of natural disasters and outages they might face, but how many have an annual testing program for their plan to identify all areas where improvements are needed?

Companies need to conduct a gap analysis exercise to assess their plan against standards and best practices, identify their weaknesses, and develop a roadmap that adds the missing elements and the right steps for implementing strategies; that way they do not need to start from scratch or try to cover all Business Continuity Plan aspects at the same time.

Know your business! The IT continuity plan is a piece of the Business Continuity Plan, hence it needs to be aligned with business strategies and objectives. Wrong or incomplete solutions can waste time and money.

Perform a regular company risk assessment review exercise to ensure all risks are covered and set the plan accordingly. Get more flexibility by outsourcing some IT functions such as the help desk; the company will be less reliant on people in case of contingency, since tasks will be handled through SLAs and covered by external vendors. This will help the company to focus on its core business processes.

As people are a key element in the IT continuity plan, creating a plan that depends on too few qualified people can threaten the overall plan. What if one of those people is unavailable for some reason? You need to identify a pool of employees who are capable of responding in an emergency, and initiate a set of best practices: job rotation, staff mobility in the job contract, a succession plan, and training, to ensure that people are ready to run the plan regardless of their positions or experience in the company.

The IT continuity plan requires a budget that should be included in the annual exercise and company plan. The key point here is to have a proactive approach, so that management is aware that the organization might have to finance the IT continuity plan and appropriate action can be taken.

The BCP should not be an afterthought when preparing the budget. It has to be included in the company plan and discussed. As with the IT continuity plan, management must be aware that the BCP might have to be financed by the organization. External funds may be required.

New trends in technology such as virtualization, mobile devices, cloud computing, and social media need to be assessed.

Many new technologies introduce complexity, so maintaining the IT environment may require extra skills and resources. Reduce complexity and keep it simple for operational staff to run, eliminating potential sources of human error.

To reduce costs of having to buy, rent, and maintain alternate facilities, a disaster recovery site, datacenters, etc., organizations should look for mutual agreements with other companies to share IT infrastructure and office desks in contingency situations.

Organizations should also consider leasing or procuring new IT infrastructure (including data communications) and arranging with suppliers to have them carry a contingency stock of IT equipment, software, etc., to be available at short notice.

In contingency situations, phone communication and the primary carrier might be down. You will therefore have to plan for multiple communication options and make sure everyone knows the options and has the appropriate phone numbers, web addresses, and emergency contacts to get and stay in touch.

Password protection is a key element of data security: IDs and passwords need to be stored in two geographically separate and secure locations, and more than one IT staff person should have access to all passwords and codes.

Every major application enhancement, technology infrastructure change, or new service offering should have its own BIA (Business Impact Analysis) and risk management reviewed for applicability, along with its RTO (Recovery Time Objective) and RPO (Recovery Point Objective) to ensure that change management is embedded during the Business Continuity Plan lifecycle.
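
As a back-of-the-envelope illustration of how RTO and RPO targets are checked against an actual incident (the figures below are invented purely for the example):

    from datetime import datetime, timedelta

    # Hypothetical targets from a BIA for one service.
    rto = timedelta(hours=4)  # maximum tolerable downtime
    rpo = timedelta(hours=1)  # maximum tolerable window of data loss

    outage_start     = datetime(2013, 12, 1, 9, 0)
    service_restored = datetime(2013, 12, 1, 12, 30)
    last_good_backup = datetime(2013, 12, 1, 8, 15)

    downtime  = service_restored - outage_start   # 3 h 30 m, within the 4 h RTO
    data_loss = outage_start - last_good_backup   # 45 m, within the 1 h RPO

    print(f"RTO met: {downtime <= rto}; RPO met: {data_loss <= rpo}")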

The Business Continuity Plan is an ongoing process which does not stop after testing. It has to be maintained and updated as required.

Tests will familiarize staff and IT teams with the continuity and recovery process. They will verify the effectiveness of the selected strategies and the readiness of the recovery site, and will identify improvements required to the process and infrastructure.

The recovery tests should be conducted at the service level rather than focusing on individual components such as hardware, systems, and applications: a particular service may span different servers, data on several local drives, and user network connectivity.

Organizations are urged to assign individuals and teams to lead, drive, and run the IT continuity plan. Authority should be given to a crisis management team to make the process effective and sustainable.

Auditing plans and procedures will enable an impartial third-party review of regulations, laws, standards, and best practices and provide recommendations.

Finally, the business’s perception of risk must be changed.

It’s no surprise that risk management and continuity planning often end up siloed into separate functional areas. Changing the perception and culture has to begin at the top level, with a top-down approach to the following tasks:

  • Putting the organization in place
  • Instituting reporting at the top level to avoid any conflict of interest
  • Including continuity management on the board meeting agenda
  • Ensuring that a continuity section is included in every corporate document
  • Initiating policies and procedures to promote and develop internal control and compliance functions
  • Conducting regular risk assessments to determine changes in the organization’s risk profile and assess performance
  • Proceeding with regular audits

“The boss knows best” philosophy must be avoided. Top management must listen to and accept others’ thoughts and ideas.

People must be educated through training and awareness programs, brainstorming sessions, and workshops. Use metrics and KPIs to assess performance and ensure compliance.

The challenge is to create a situation where people will instinctively look for risk and consider its impact prior to making decisions.

When you think about processes, setting up new systems, hiring new employees, contracting with vendors, and opening new accounts for customers, you need to think RISK.

IT continuity planning trends

Virtualization will make the plan easier by reducing the number of IT assets which need to be maintained, supported, and reviewed. We will have fewer devices to worry about, and the RTO can be reduced by switching quickly from the live environment to backup virtual machines.

Desktop virtualization can enable company staff to work off-site or at home through Citrix and VDI, which gives the organization the flexibility to recover quickly and get people on board without having to invest in alternate site space, reducing the cost of maintaining a large alternate site for its employees. This needs to be secured through appropriate tunneling, with data leakage protection installed on the machine.

The deployment of virtual machines over the Internet can be an alternative that allows staff access through their personal home computers, making them more productive during outages by using an environment they are familiar with.

Cloud

As applications (SaaS), platforms (PaaS), or infrastructures (IaaS) are delivered from the cloud, an organization can mitigate and drastically reduce the risk of major or minor disruptions. The drawback for IT is the additional responsibility of managing third parties through an efficient problem management process and service level agreements, to ensure that third-party suppliers have the resources, failover systems, people, and processes in place to maintain the same level of service and guarantee data availability regardless of the disruptions and outages faced at the supplier level.

This exercise can become more complicated in the future. As more and more companies outsource services to the cloud, the process will have to include several suppliers and services for maintaining the plan and proceeding with required testing and audit reviews.

Mobile devices

Getting more mobile devices into the workplace will definitely improve business continuity strategies. It has become easier to communicate during a disaster through tablets, smartphones, and BlackBerrys, which gives more flexibility in workforce recovery options: staff can access corporate applications and communicate with coworkers, customers, and vendors from multiple remote locations. More software designed for mobile devices enables users to access the information needed during a crisis, such as the status of recovery, the recovery site location, the list of applications and services available, and emergency updates.

Social networks

An article published by Forrester in July 2012, “It’s Time to Include Social Technology in Your Crisis Communication Strategy,” stressed the fact that subscribing to automated communication services is now common and widely used by many professionals. The proliferation of mobile devices and easy Internet access enable the use of social technologies such as Twitter, Facebook, and Skype as elements of business continuity and recovery strategies.

Organizations should leverage and assess technologies to make their response plan effective. They need to look at which platform is actually used by employees, customers, and vendors. These channels can be used for both communicating and getting information and help from external resources to improve the business continuity and recovery process. The drawback is more uncontrolled spreading of information outside, which can damage the organization’s reputation and make the crisis communication process more complicated.

Sprocket Networks – Colocation Deal-Full Cab $389 with 20A Power and Network Included

________________________________________

Sprocket Networks in Dallas, Texas, a 10-year veteran of enterprise hosting, is offering an end-of-year colocation special in our own SSAE-16 audited and certified data center.
___________________________________________

Sprocket Networks’ full cabinet colocation special is $389 per month on a month-to-month term with a $389 setup fee (want to lower your setup fee? Sign a 24-month term for half off the setup fee, or a 36-month term for no setup fee). Here is everything you get for only $389:
* Full 42U Private Cabinet
* 1 Mbps blended IP on a 100Mbps port
* 1 20Amp A+B power circuit 
* UPS protected power
* Generator Protected Power
* /29 CIDR (8 total / 5 usable IPs)
* Free initial setup
* Free reverse DNS
* Free IP/KVM as needed for troubleshooting
* Free remote hands support
* 24/7 Support
$389.00 per month

ONLY NEED A HALF CABINET?
* 20U ½ Private Cabinet
* 1 Mbps blended IP on a 100Mbps port
* 1 20Amp A+B power circuit 
* UPS protected power
* Generator Protected Power
* /29 CIDR (8 total / 5 usable IPs)
* Free initial setup
* Free reverse DNS
* Free IP/KVM as needed for troubleshooting
* Free remote hands support
* 24/7 Support
$289.00 per month with a $289 setup fee that can be waived with a 36-month contract or cut in half with a 24-month contract!
___________________________________________

We have space available now for immediate activation and you can be online the same day.

Your cabinet will be hosted in Dallas, Texas, in Sprocket Networks’ wholly owned, fully certified data center that provides 24/7 support. If you would like a tour, please let us know.

This is a month-to-month deal, but if you sign a 24-month contract you can cut your setup fee in half, and on a 36-month contract there is no setup fee.

If needed, we can customize your order any way you would like.

This is a special for new clients only.

Please contact us at: salesticket@sprocketnetworks.com or call us at 214-855-5020 for any questions or to place your order!

Sprocket Networks Has Joined Forces, Expands Capabilities With Opus-3 Data Center

Sprocket Networks

DALLAS, Dec. 16, 2013 – Sprocket Networks has joined forces with Opus-3 Data Center, expanding its capabilities and resources into a global platform. Sprocket Networks is a privately held hosting solutions company based in Dallas, Texas, that specializes in Colocation, Dedicated Hosting, Cloud Hosting, and Managed Services. Sprocket Networks is an industry veteran with over 10 years of hosting business success, with customers in 27 countries on 6 continents and in over 35 states in the United States. This was a private transaction and terms were not disclosed.

“With Sprocket Networks joining the Opus-3 Data Center global platform, clients will now have a broader range of capabilities and services they can choose from to best meet their needs,” says Justin Clutter, Sprocket Networks CTO. “Not only can Sprocket Networks provide the services our customers have come to expect from us,” he adds, “we now have a stronger technical staff, greater capacity, and new offerings such as Virtual Data Center Services and Disaster Recovery and Business Continuity solutions.”

“We are very excited about Sprocket Networks joining with our Opus-3 Team,” says David Herr, COO of Opus-3. “Our commitment at Opus-3 has always been to exceed customer expectations. One aspect that is so attractive about Sprocket Networks is their culture to provide the best customer service. Our commitment has always been to best serve the client, first and foremost. With Sprocket Networks and Opus-3 joining forces, it will be easier and faster for organizations to adopt the game-changing cloud services that we provide.  Opus-3 will enable Sprocket Networks to deliver the security, privacy and reliability of private clouds with the economy and speed of a public cloud.”

About Opus-3
Opus-3 Data Center provides SOC1, SOC2 & SOC3 Tier III Data Center solutions on a global basis that are fully SSAE16 Type II certified. Opus-3 offers custom and scalable Colocation, Private Cloud Hosting, Virtual Data Center Services, Dedicated Servers, Virtual Servers, Managed Services, Mass Storage and Disaster Recovery / Business Continuity solutions.

http://www.prnewswire.com/news-releases/sprocket-networks-has-joined-forces-expands-capabilities-with-opus-3-data-center-236033041.html

The Benefits and Threats of Moving to the Cloud

By: Ninja@s3c

What is the cloud, anyway?
The term may be new, but the concept certainly is not. Throughout the history of computing, IT organizations have used their own infrastructure to host applications, data, servers, etc. Now most of them rent the infrastructure, with remote servers hosting their applications or data. Organizations called service providers exist specifically to provide, manage, and maintain the infrastructure on which their client organizations’ applications and data are hosted. The client organization gets access controls to manage its applications and data hosted on the remote server. This is the main idea behind cloud computing.

More precisely, cloud computing is a method of accessing, delivering, and managing IT services over the Internet. Network resources are provided to customers on demand. As a customer, you need not own infrastructure; you simply rent, paying your cloud service provider for what you use.

Benefits of Cloud Computing:


Organizations may benefit in terms ranging from reduced cost to online support and flexibility. The major benefits can be summarized as:

Location Independent: As a customer, you need not worry about where your data is hosted. You can access and manage it from virtually anywhere in the world. All you need is to be connected to the Internet.

Low Total Cost of Ownership: Since you use the service provider’s infrastructure and resources, you are exempted from the cost of setting up your own.

Pay-as-you-use: The most appealing feature is the option to pay only for what you use and when you use it, which fits well within your organization’s budget.

Support: As service providers host your data on their infrastructure, the onus of maintaining it and supporting the client’s requests is on them.

Security and storage management: The service providers securely manage your data and have a backup and disaster recovery plan, so your data stays safe.

Scalability and Sustainability: Service providers have large infrastructure, high-end processors and memory devices that you may rent as per your requirements.

Resources are dynamically allocated between users: additional resources are allocated when needed and released when no longer required.

Highly Automated: Your IT personnel do not need to worry about keeping software up to date.

Maintenance: Maintenance of cloud computing applications is easier, since they don’t have to be installed on each user’s computer.

Types of Cloud Computing:

Infrastructure-as-a-Service (IaaS): Infrastructure-as-a-Service gives the customer a virtual server or storage with a unique IP address. Amazon Web Services is an example: a user’s application interface accesses virtual servers and storage hosted by Amazon (say, to read books online).
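
As a hedged sketch of what consuming IaaS looks like in practice, the following Python snippet launches a virtual server through the boto3 AWS SDK; the region and AMI ID are placeholders, and valid AWS credentials are assumed to be configured:

    import boto3  # AWS SDK for Python (pip install boto3)

    # Region and image ID are placeholders; substitute real values for your account.
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-00000000",   # hypothetical machine image
        InstanceType="t2.micro",  # small instance size
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", instances[0].id)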

Platform-as-a-Service (PaaS): Platform-as-a-Service provides services such as software development tools hosted on the provider’s servers, which customers can access through APIs. The users execute their applications on platforms hosted by the cloud provider through the platform or Application Program Interface (API). Google Apps is an example of platform services.

Software-as-a-Service (SaaS): In the Software-as-a-Service model, software along with its data resides in the provider’s cloud, and end customers can use both on a contract basis from the provider.

Challenges faced by the Organization:
The basic issues that an organization may face can be categorized as the following:

Privacy: You are never sure whether the service provider can monitor your data, be it sensitive or not.

Security: Security concerns arise because both customer data and programs reside on the provider’s premises.

Availability: The cloud service provider needs to make sure the system is available for its consumers. Service level agreements (SLAs) between the cloud service provider and the consumer govern availability and performance.
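
To put availability numbers in perspective, here is a small illustrative calculation; the SLA percentages are generic examples, not any particular provider’s terms:

    # Convert an availability percentage into allowed downtime per year.
    HOURS_PER_YEAR = 365 * 24

    for sla in (99.0, 99.9, 99.99):
        downtime = HOURS_PER_YEAR * (1 - sla / 100)
        print(f"{sla}% availability allows about {downtime:.2f} hours of downtime per year")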

The following picture describes how you lose control over data and other resources as they move from your dedicated environment to the premises of your service provider.

[Figure: control over resources, from customer-controlled (green) through shared (blue) to provider-controlled (orange)]

As you can see, the green blocks show the resources under your control, the blue blocks show where you share resources with your service provider, and the orange blocks depict the features under the control of your service provider. If you are hosting data and servers in your own environment, you have maximum control over them. However, as soon as you rent resources from service providers and finally move your resources to the actual cloud, you can no longer personally control or manage them; instead, the cloud provider gives you access controls through which you can manage and control your data and other resources.

Still, some organizations, especially smaller ones, are skeptical about the cloud despite the fact that it is much more cost-effective for them. Rightly so, because they may have concerns about the following:

Accessibility issues: Organizations may face problems accessing resources in the cloud if a communication outage happens due to attacks such as denial of service and distributed denial of service.

Authentication issues: Due to TCP/IP-related attacks like IP spoofing, RIP attacks, ARP poisoning, and DNS poisoning, in which routing tables can be altered, organizations may not be sure of their trusted machines’ authenticity. For more information on these types of attacks, check out the CCNA security course offered by Intense School.

Data verification, tampering, loss, and theft: Data is at risk while on a local machine, while in transit, while at rest on an unknown third-party device or devices, and during remote backups.

Information transmitted from the client through the Internet poses a certain degree of risk because of issues of data ownership; enterprises should spend time getting to know their providers and their regulations as much as possible before assigning some trivial applications first to test the water.

Data segregation: Data in the cloud is typically in a shared environment alongside data from other customers. The cloud provider should give evidence that encryption schemes were designed and tested by experienced specialists.
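
One common mitigation is to encrypt data on the client side before it ever reaches the shared environment. The sketch below uses the Fernet recipe from the Python cryptography library; key storage and rotation are deliberately omitted, and the sample record is invented:

    from cryptography.fernet import Fernet  # pip install cryptography

    # Generate and keep the key yourself; never store it next to the data.
    key = Fernet.generate_key()
    f = Fernet(key)

    plaintext = b"sample customer record"   # invented example data
    ciphertext = f.encrypt(plaintext)       # this is what leaves your premises

    # Only a holder of the key can recover the original.
    assert f.decrypt(ciphertext) == plaintext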

Recovery: A proper recovery and backup plan should be in place. Any offering that does not replicate the data and application infrastructure across multiple sites is vulnerable to a total failure. In addition, the timeframe within which restoration will be complete is a concern.

Physical access issues: Both the issue of an organization’s staff not having physical access to the machines storing and processing its data, and the issue of unknown third parties having physical access to those machines.

To summarize, these are some of the top potential threats of cloud computing that must be thought through rather than moving to the cloud blindly. The Cloud Security Alliance identifies the following potential risks in its Top Threats to Cloud Computing V1.0 research report:

  1. Abuse and Nefarious Use of Cloud Computing: Cloud providers offer their customers the illusion of unlimited computing, network, and storage capacity. There are registration processes where anyone with a valid credit card can register and immediately begin using cloud services. By abusing the relative anonymity behind these registration and usage models, spammers, malicious code authors, and other criminals have been able to conduct their activities with relative impunity.

Cloud computing providers are actively being targeted, partially because their relatively weak registration systems facilitate anonymity, and providers’ fraud detection capabilities are limited. Criminals continue to leverage new technologies to improve their reach, avoid detection, and improve the effectiveness of their activities.

Examples: IaaS offerings have hosted the Zeus botnet, Infostealer Trojan horses, and downloads for Microsoft Office and Adobe PDF exploits.

  2. Insecure Interfaces and APIs: Cloud computing providers expose a set of software interfaces or APIs that customers use to manage and interact with cloud services. The security and availability of general cloud services are dependent upon the security of these basic APIs. From authentication and access control to encryption and activity monitoring, these interfaces must be designed to protect against both accidental and malicious attempts to circumvent policy.

Reliance on a weak set of interfaces and APIs exposes organizations to a variety of security issues related to confidentiality, integrity, availability and accountability.

Examples: Anonymous access and/or reusable tokens or passwords, clear-text authentication or transmission of content, inflexible access controls or improper authorizations, limited monitoring and logging capabilities, unknown service or API dependencies.
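
By contrast with the weak patterns just listed, a minimal sketch of calling a cloud management API over TLS with a short-lived bearer token, rather than clear-text or reusable credentials, might look like this (the endpoint and token are placeholders):

    import requests  # pip install requests

    API_URL = "https://cloud.example.com/v1/servers"         # hypothetical endpoint
    TOKEN = "short-lived-token-from-your-identity-provider"  # placeholder

    # HTTPS protects the transport; a revocable, expiring bearer token
    # avoids sending a reusable password in the clear.
    resp = requests.get(API_URL,
                        headers={"Authorization": f"Bearer {TOKEN}"},
                        timeout=10)
    resp.raise_for_status()
    print(resp.json())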

  3. Malicious Insiders: This threat is amplified for consumers of cloud services by the convergence of IT services and customers under a single management domain, combined with a general lack of transparency into provider process and procedure. For example, a provider may not reveal how it grants employees access to physical and virtual assets, how it monitors these employees, or how it analyzes and reports on policy compliance.

The impact that malicious insiders can have on an organization is considerable, given their level of access and ability to infiltrate organizations and assets. Brand damage, financial impact, and productivity losses are just some of the ways a malicious insider can affect an operation.

  4. Shared Technology Issues: Often, the underlying components that make up this infrastructure (e.g., CPU caches, GPUs, etc.) were not designed to offer strong isolation properties for a multi-tenant architecture. To address this gap, a virtualization hypervisor mediates access between guest operating systems and the physical compute resources. Still, even hypervisors have exhibited flaws that have enabled guest operating systems to gain inappropriate levels of control or influence on the underlying platform. Attackers may focus on how to affect the operations of other cloud customers, and how to gain unauthorized access to data.
  5. Data Loss or Leakage: There are many ways to compromise data. Deletion or alteration of records without a backup of the original content is an obvious example. Unlinking a record from a larger context may render it unrecoverable, as can storage on unreliable media. Loss of an encoding key may result in effective destruction. There is damage to one’s brand and reputation, and a loss could significantly impact employee, partner, and customer morale and trust.
  6. Account or Service Hijacking: If an attacker gains access to your credentials, they can eavesdrop on your activities and transactions, manipulate data, return falsified information, and redirect your clients to illegitimate sites. Your account or service instances may become a new base for the attacker. With stolen credentials, attackers can often access critical areas of deployed cloud computing services, allowing them to compromise the confidentiality, integrity and availability of those services.
  7. Unknown Risk Profile: Versions of software, code updates, security practices, vulnerability profiles, intrusion attempts, and security design are all important factors for estimating your company’s security posture. Information about who is sharing your infrastructure may be pertinent, in addition to network intrusion logs, redirection attempts and/or successes, and other logs. When adopting a cloud service, the features and functionality may be well advertised, but what about details or compliance of the internal security procedures, configuration hardening, patching, auditing, and logging? How are your data and related logs stored, and who has access to them? What information, if any, will the vendor disclose in the event of a security incident?

Have You Outgrown Shared Hosting?


Advances in technology have enabled the majority of businesses, both small and large, to rely significantly on technology resources to enhance their output and improve their competitive advantage. One such resource that modern businesses use is web hosting, and the most commonly used type is shared hosting. In shared hosting, a web hosting provider uses one dedicated server to support numerous websites that belong to different clients. Although different businesses share one server, each has safe and secure access to its own website. Despite the benefits you may be gaining from shared hosting, such as its affordability and efficiency, you need to know when shared hosting is not enough.

When Is Shared Hosting Not Enough

You may consider shifting from shared hosting to another type of web hosting, such as cloud hosting or a Virtual Private Server, when you have a significantly large website that not only has a high degree of traffic but also has specialized requirements. Such a website will require you to improve your hosting infrastructure in order to enhance its reliability, performance, security, and scalability, which cannot be achieved with shared hosting.

Shared hosting is not enough when you constantly experience performance problems associated with sharing a server with heavy users. Performance problems in shared hosting are often caused by limited storage space. If your website cannot function optimally due to constant outages, your visitors are not having pleasant experiences. When this occurs, it is a clear sign that shared hosting is not enough for your business, and it may be time to identify a better, more reliable, scalable, dependable, and efficient type of web hosting for your business.