

The Advantages of Cloud Computing  

Here are some advantages to think about when considering a cloud solution:

    • Employees can access documents and collaborate from anywhere; excellent for international companies.
    • Very fast to get up and running.
    • Cost effective: no servers or applications to buy and install, and less IT staff required.
    • No datacenter required, a major saving if you don’t already have one.
    • You typically pay only for software services, not hardware.
    • You don’t need specialist IT staff.
    • Fewer outages.
    • Leverage the provider’s state-of-the-art datacenter (power, cooling, redundancy, high-speed internet access, etc.) instead of building your own.
    • No need to provision extra resources for growth or peak load; you buy extra resources as you need them and can easily scale up and down.
    • Reduced carbon emissions:
      • You don’t need to power (and cool) an entire server, since you pay for only what you use.
      • Instead of running your own datacenter, you share the provider’s.
      • Cloud providers’ datacenters are becoming more efficient and eco-friendly, with better building systems, insulation, etc.
      • They’re better at maximizing resource utilization.
      • They use vanity-free hardware, which reduces plastic use.

    About Open Source  

    What are the advantages of open source software?

      • Open source software has matured to the point of rivaling top proprietary software.
      • No vendor lock-in. Typical lock-in issues include:
        • Lack of portability.
        • Losing support if you don’t upgrade on the vendor’s schedule.
        • Inability to customize the software, since vendor software is source-locked. With open source software, you get the source code.
      • No license and maintenance fees; you only pay for support. Look at Total Cost of Ownership, though, as it isn’t always that straightforward.
      • You may not even need to buy support, since some open source software has a huge user community that can deal with almost any issue you encounter. Mature software like Apache and MySQL are excellent examples: they’re reliable and backed by huge communities, so you may not need a support contract at all.
      • You’re not locked into the vendor’s support. You can find consultants who know the software well enough that you’re not required to buy support from its authors.
      • You can “try before you buy”, because the software is free to download and install.
      • Open source software is much better at adhering to open standards than proprietary software, making it more interoperable with other systems.
      • As business requirements change, you have the flexibility to swap out a particular component of the architecture without being locked into an entire vendor stack.
      • Open source projects have little motivation to attempt vendor lock-in, because they gain no commercial benefit from it; adherence to standards matters more.
      • You can find open source applications for all the common enterprise applications.
      • A large number of businesses and government organizations now use open source software.
      • Open source software has arguably better quality and security:
        • Since the source code is available, users find bugs and not only report them quickly but often fix them themselves.
        • Major defects, including security vulnerabilities and exploits, are usually fixed within hours.
        • With proprietary software, a defect report must be filed and the vendor then decides whether to issue an updated release. That process introduces considerable delay, leaving users at the mercy of the vendor’s internal processes.
      • Open source software is auditable. You don’t have to take the vendor’s word on quality, security, or adherence to standards, because you can inspect the source yourself.

    What is Point-to-point Encryption (P2PE)?  

    With hackers increasingly stealing credit card data in transit using packet-sniffing methods, there is a need to encrypt data before transmission. Point-to-point encryption (P2PE), sometimes also called end-to-end encryption (E2EE), secures cardholder data (debit and credit card data) from the moment the card is swiped (or key-entered) all the way to the payment processor. Using this technology at the point of sale (POS), the merchant never has to store, process, or transmit the data in the clear.
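The flow above can be sketched as a toy model. Here a one-time-pad XOR stands in for the hardware-backed encryption a real P2PE device performs (real devices use schemes like AES with DUKPT key management), and all names are illustrative, not from any vendor API:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad cipher standing in for the device's hardware encryption."""
    return bytes(a ^ b for a, b in zip(data, key))

# The payment processor injects a key into the POS device; the merchant never holds it.
processor_key = secrets.token_bytes(16)

card_number = b"4111111111111111"             # swiped or key-entered at the POS
ciphertext = xor(card_number, processor_key)  # encrypted inside the device

# The merchant only ever stores and transmits the ciphertext...
assert ciphertext != card_number

# ...and only the processor, holding the key, can recover the card data.
assert xor(ciphertext, processor_key) == card_number
```

The point of the sketch is the trust boundary: because the decryption key never exists on the merchant's side, the merchant handles only opaque ciphertext end to end.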


    Since the POS devices perform the encryption and the merchant has no means to decrypt the data, the encrypted data can be deemed out of scope for PCI DSS (Payment Card Industry Data Security Standard), greatly reducing the merchant’s compliance requirements. P2PE hardware vendors typically host the data in PCI-compliant data centers, further reducing the merchant’s security and compliance risks.

    Note that P2PE technology reduces the scope of the cardholder data environment but does not eliminate PCI DSS obligations altogether. Global standardization of P2PE will also be needed for the PCI SSC (PCI Security Standards Council) to create specific guidelines to help merchants. Those guidelines should also cover software-only implementations as well as hybrid software-hardware P2PE implementations.

    You can find the PCI Security Standards Council’s guidance on P2PE here.

    What is Tokenization?  

    With P2PE above, we talked about securing data in transit. What about data at rest? The number one cause of cardholder data compromises is merchants’ inability to properly protect their customers’ stored credit and debit card data.

    PCI DSS has strict requirements concerning the storage of sensitive cardholder data within software applications. Software providers can protect customers by implementing a PCI compliant offsite data storage solution that utilizes tokenization technology. The service provider handles the issuance of the token value and bears the responsibility for keeping the cardholder data locked down.

    Tokenization secures cardholder data by replacing sensitive information with randomly generated values called tokens. The service provider issues the merchant a driver that generates tokens, which typically preserve the last four digits of the credit card number. When an authorization request is made to verify the legitimacy of a transaction, a token is returned to the merchant instead of the card number, along with the authorization code for the transaction. The merchant stores the token, while the service provider stores the actual cardholder data in a secure token storage system.
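A minimal sketch of that token issuance, assuming an in-memory dict stands in for the provider's secure token store (a real service provider keeps this mapping in a hardened, PCI-compliant system; the function and variable names are illustrative):

```python
import secrets

vault = {}  # stand-in for the provider's secure token store: token -> card number

def tokenize(card_number: str) -> str:
    """Issue a random token that preserves the last four digits of the card number.

    The token has no mathematical relationship to the card number; the only way
    back to the original data is the provider's vault lookup.
    """
    token = secrets.token_hex(6) + card_number[-4:]
    vault[token] = card_number
    return token

token = tokenize("4111111111111111")
assert token.endswith("1111")               # last four digits preserved for receipts
assert token != "4111111111111111"          # the merchant never stores the card number
assert vault[token] == "4111111111111111"   # only the provider can map back
```

Because the token is drawn at random rather than derived from the card number, no amount of collected tokens lets an attacker compute the original data, which is exactly the contrast with encryption discussed below.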

    The difference between encryption and tokenization is this: encryption performs a mathematical operation on the original data, converting it to seemingly random characters, and to retrieve the original data you reverse that operation. Encrypted data can theoretically be compromised given a large enough sample of ciphertext and enough time. With tokenization, however, there is no mathematical relationship between the original data and the token, so it is theoretically impossible to derive the original data from a token, no matter how large a sampling of tokens you have.

    Tokenization can reduce merchants’ PCI requirements by descoping systems that do not store sensitive cardholder data. It eliminates the need for onsite credit card storage.

    Although we’ve talked about credit card information, tokenization can be used to protect any kind of sensitive data.

    PCI Tokenization, P2PE & Virtualization Downloads  

    You can download the official documents from the PCI Security Standards website.

    PCI Data Security Standards (PCI DSS) Tokenization Guidelines Information Supplement: Download

    PCI Point-to-Point Encryption – Solution Requirements and Testing Procedures: Download

    PCI Virtualization Guidelines: Download

    PCI Virtualization Guidelines  

    Virtual Environments

    Here are a couple of things to think about if your Cardholder Data Environment (CDE) runs in a virtualized environment.

    Virtual machines (VMs) running on top of a single hypervisor are considered to be in the same trust zone. So if one of your VMs on a hypervisor is in scope for PCI, the rest of the VMs on that hypervisor are in scope too; essentially, if one VM is in scope, the hypervisor is in scope. You can easily segregate your CDE on a separate hypervisor, but it becomes more complex when you move VMs between hypervisors for failover. After all, seamless provisioning of VMs, automatic failover, and the ability to move VMs around for performance are some of the main reasons for virtualizing in the first place.

    Another point to consider is virtual firewalls, which monitor traffic between VMs on the same hypervisor; traditional firewalls only see traffic leaving the hypervisor for the network. Virtual firewalls might also let you mitigate the scoping issue above, if VMs are isolated from each other thoroughly enough to be considered “separate hardware on different network segments”.

    But here is what the guideline says in section 4.1.5: “Similarly, processes controlling network segmentation and the log-aggregation function that would detect tampering of network segmentation controls should not be mixed. If such security functions are to be hosted on the same hypervisor or host, the level of isolation between security functions should be such that they can be considered as being installed on separate machines.” And section 4.2 adds: “Even if adequate segmentation between virtual components could be achieved, the resource effort and administrative overhead required to enforce the segmentation and maintain different security levels on each component would likely be more burdensome than applying PCI DSS controls to the system as a whole.” So the PCI recommendation is to stay away from this approach as much as possible.

    If a virtual machine stores, processes, or transmits cardholder data, it is in scope. This is true for server and desktop VMs alike, so you should evaluate your VDI deployment as well.
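The scoping rule above (one in-scope VM pulls its hypervisor, and every VM on it, into scope) can be expressed as a small check. This is a hypothetical helper for reasoning about your inventory, not a PCI tool:

```python
def pci_scope(hypervisors: dict) -> set:
    """Return the names of all VMs in PCI scope.

    `hypervisors` maps each hypervisor name to a {vm_name: handles_cardholder_data}
    dict. VMs on one hypervisor share a trust zone, so a single VM that handles
    cardholder data pulls every VM on that hypervisor into scope.
    """
    in_scope = set()
    for vms in hypervisors.values():
        if any(vms.values()):      # at least one VM handles cardholder data...
            in_scope.update(vms)   # ...so all VMs on that hypervisor are in scope
    return in_scope

scope = pci_scope({
    "hv1": {"web": False, "payments": True},  # "payments" drags "web" into scope
    "hv2": {"reporting": False},              # a clean hypervisor stays out of scope
})
assert scope == {"web", "payments"}
```

The example shows why segregating the CDE onto its own hypervisor matters: moving the payments VM to hv2 for failover would immediately pull reporting into scope.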

    Cloud Computing Environments

    Now, looking at cloud computing environments: if your cloud service provider is PCI compliant, do you automatically become compliant as well? The answer is no, unless you have a binding agreement with your service provider. This is because PCI always considers you responsible for your data.

    In an IaaS (Infrastructure as a Service) environment, your service provider is responsible for the physical facility (data center), computer hardware, and network hardware, while you’re responsible for the virtual infrastructure (the hypervisor and VMs) as well as the operating systems, databases, software, and data.

    In a PaaS (Platform as a Service) environment, your service provider is responsible for the physical facility, computer hardware, network hardware, and virtual infrastructure, as well as the operating systems and databases, while you’re responsible for the software and the data.

    In a SaaS (Software as a Service) environment, your service provider is responsible for everything from the physical facility up through the operating systems, databases, and software, while you’re responsible only for the data.

    So, at the end of the day, you are responsible for your data.
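The division of responsibility described above can be summarized as a simple matrix (the layer names are illustrative labels, “P” = provider, “C” = customer, following the article’s breakdown):

```python
# Who manages each layer, from the physical facility up to the data,
# in the three service models described above.
RESPONSIBILITY = {
    #         facility hardware network hypervisor  vm     os  database software data
    "IaaS": {"facility": "P", "hardware": "P", "network": "P",
             "hypervisor": "C", "vm": "C", "os": "C",
             "database": "C", "software": "C", "data": "C"},
    "PaaS": {"facility": "P", "hardware": "P", "network": "P",
             "hypervisor": "P", "vm": "P", "os": "P",
             "database": "P", "software": "C", "data": "C"},
    "SaaS": {"facility": "P", "hardware": "P", "network": "P",
             "hypervisor": "P", "vm": "P", "os": "P",
             "database": "P", "software": "P", "data": "C"},
}

# Whatever the model, the data stays with you.
assert all(model["data"] == "C" for model in RESPONSIBILITY.values())
```

Reading down a column shows the pattern: as you move from IaaS to SaaS the provider takes over more layers, but the bottom row never changes hands.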

    Please read the Virtualization Guidelines on the PCI Security Standards website.

    Consultant IT Manager  

    This role exists to bridge the gap between IT and the rest of the business. IT is no longer overhead; it has become an enabler in any business. Organizations streamline the way they do business by employing various technologies: ERP solutions that integrate the entire business architecture, cloud computing that lets businesses serve clients in any part of the world, VPN solutions that allow employees to work remotely, IP telephony that gives employees a phone number that follows them wherever they travel. The list is endless. But for each piece of technology a company decides to employ, there are a multitude of vendors and options to choose from, which makes the decision-making process quite daunting. No matter how good the technology is, at the end of the day it needs to contribute to the bottom line: it has to either make you money or save you money. You can’t deploy technology for its own sake, or because the industry is touting it as a must-have for every business. You need to get down to the details of your business objectives and draw out your requirements for enabling technology. Work backwards: start with the business end results and trace them back to the technology. Don’t be surprised if you find that the technology you already have is good enough for your requirements.

    The Consultant is there to build an IT organization that evaluates and deploys technology contributing directly, and measurably, to the company’s bottom line. Part of building out this organization is instilling the department’s end goal in each of its members. IT administrators need to know they’re not there just to keep the lights on; they need to be an integral part of the business, continually communicating with the various business units to understand the technology requirements that will help everyone achieve the end goal: the success of the company. The Consultant builds this way of thinking, this culture, into the IT department and engages other business unit leaders in working with IT towards the common goal.

    The success of the Consultant is measured by the IT department’s ability to support the company efficiently without him or her. To that end, the Consultant should eventually be replaced by someone from the team who has grasped the IT-as-business concept, shown an ability to work across the organizational structure, and delivered measurable outcomes to the bottom line. The new leader will have consistently demonstrated an ability to select technology that delivers business value.

    The Case for Managed Services  

    The Value of Managed Services

    Following on the concept of the “Consultant IT Manager”, let’s talk about the value of Managed Services. Managed Services is where you outsource your IT to an external company that charges for the services it provides. That is a simplistic definition, but it suffices for our discussion here.

    As outlined in Consultant IT Manager, the goal of the IT department should be to select and deploy technology that directly affects the company’s bottom line. To that end, the more you can free your IT staff from mundane support services, from just keeping the engine running, the more time they have to focus on your company’s business goals and work on value-add services. When they’re not bogged down with day-to-day break-fix, they come up with innovative ways to run the business and achieve its goals. This is where the value of Managed Services lies. By outsourcing your IT, you can now focus on making better shoes if you’re a shoemaker, or on deploying technology that reduces the cost and time of making shoes. If you’re a mining company, you focus on mining, and on enabling your employees to work more efficiently and safely through technology. If IT is not your core business, why focus on it when you can outsource it? And since Managed Services providers work with various clients, and so experience different ways of doing the same things, you get to leverage their experience; no reinventing the wheel here. At the same time, you have an IT organization built on the “Consultant IT Manager” concept, so you can keep your Managed Services providers honest.

    Another benefit of outsourcing is that it makes things like disaster recovery and governance less complex. For example, say you outsource your data center to a service provider. If you select a provider with a solid disaster recovery setup and a data center holding various compliance certifications, you don’t have to worry about those things yourself. Disaster recovery and compliance are expensive undertakings, so you can save a lot of money here. Especially if your service provider also manages your IT and intimately knows your business processes, you can sleep soundly at night, not worrying about your company’s disaster recovery plans or security, and focus on your bread making or whatever your core business is. It’s not as simple as I’ve made it out to be here, but you get the point. As I mentioned in the PCI compliance post this month, even if you put your data in the cloud, PCI still considers you, not the service provider, responsible for your data. But you can go a long way towards reducing the complexity of your IT infrastructure and processes, and reduce your IT costs. And with mature technologies like cloud computing, you can outsource your IT services and bring them back in-house anytime you want without too much difficulty. All these decisions are now driven by your business needs, not by technology. That’s the beauty of it: no matter how big your company is, it makes you nimble and adaptable.

Copyright © 2012-2017 Yared Consulting Inc. All Rights Reserved. | Privacy Policy | Terms & Conditions