Checklist 2.0 News
SEC guidance: Material information regarding cyber security risks and cyber incidents is required to be disclosed
On October 13, 2011, the SEC issued new guidance on the disclosure of material information about cyber security risks and cyber incidents. This corporate finance disclosure guidance is not a rule, regulation, or statement of the Securities and Exchange Commission (SEC). The guidance advises that when one or more cyber incidents materially affect a registrant's products, services, relationships with customers or suppliers, or competitive conditions, the registrant should disclose this to the SEC.
A Better Metric for Analyzing the Value of the Cloud
By JP Morgenthal
November 18, 2009
While you may be reading and hearing about positive financial and economic analyses of the benefits of Cloud computing, those analyses are based on unrealistic total cost of ownership (TCO) and return on investment (ROI) figures and misinterpreted CapEx/OpEx calculations. Indeed, these calculations are missing a complete understanding of the value of the Cloud in the IT service delivery equation, which is why comparing Cloud computing alternatives requires a modern metric that fully reflects the service delivery model.
In my opinion, no two metrics have been more injurious to the IT field than TCO and ROI. I have yet to meet one senior executive with budgetary responsibility who has accurately tied investment in IT back to a sizable gain or represented a true TCO. Let's face it: these are notional concepts at best, devised to be manipulated to provide IT executives with justifications for funding projects of questionable value to the business.
Before you inundate me with nonsensical comments about investments by the likes of Amazon, Dell and Google, please note that IT is an expense line item for these companies; it is their business models and processes that delivered the profitability that ensued. Indeed, I would go so far as to assert that if you track a specific IT investment, directly acquired to support a singular initiative, for one year forward, you may be able to show a positive return relative to that expenditure. However, if you track that investment over the next three years, most likely you have either a break-even scenario or a loss.
The reason I assert you will end up with break-even or a loss is that once you start to accurately identify the additional expenditures necessary to maintain and operate that investment, costs start to increase dramatically relative to the single original expenditure. To offset these costs, a business would need a cost reduction or profit increase on the order of 25% to have a "return". As I noted above, TCO and ROI are notional concepts designed to be manipulated by those with budgets to justify expenditures. I have yet to see one CIO actually go back past year one and reconcile the initial ROI estimates against actual results.
This brings me to the point of this article. Interestingly, it's IT that loses in the long run using these metrics, because, traditionally, these financial justifications have worked against them, resulting in missed expectations. If you don't believe me, compare the turnover of CIOs to CEOs or CFOs in the business world. Now, with Cloud computing on the tongues of every executive who can pick up a copy of Forbes or Time, it's critical for IT executives to select an appropriate metric that allows them to illustrate and accurately calculate value for investment dollars, especially in a down economy.
Total Service Cost (TSC)
The metric I recommend is total service cost (TSC). Basically stated as a formula, TSC is:
(Cost of Infrastructure + Cost of Operations + Cost of Software + Cost of Risk) – Billed Usage = TSC
Cost of Infrastructure: Whether you're considering public, private or hybrid Cloud solutions, there is a computable cost associated with the infrastructure. Of course, computing this for private Cloud configurations can be more difficult, especially if you have not established effective metering solutions. That is, without effective metering solutions, an accurate TSC requires that you allocate and track percentages of the overall costs for the used infrastructure, inclusive of utilities, network, storage, etc., across services. Additionally, while it would be simpler to estimate, the problem with estimates is that they will impact your ability to accurately determine costs for underutilized infrastructure, a key component in deciphering Cloud alternatives.
Cost of Operations: These costs must include all resources involved with delivery of a service including management of the information, such as information quality and assurance, in addition to traditional network, systems and data center operations.
This is one area where service oriented architecture (SOA) is an important design paradigm for Cloud computing. Those who recognized the strategic value of SOA to specify and manage services will see real simplification in computing operational costs on a per-service basis, compared with those who have relegated SOA to a technical effort focused on Web services and ESBs.
Ultimately, SOA is a fractal pattern. Services are made up of other services. Packaging and costing those services simplifies the overall costing models for aggregate service; in this case help desk, monitoring, system administration, etc. By packaging and costing each of the operational services appropriately, you can amortize the cost of the operational services across all the composite services that are dependent upon them.
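As a rough illustration of the formula above, the TSC calculation might be sketched as follows. All figures and cost-category breakdowns here are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical Total Service Cost (TSC) sketch; all figures are illustrative.
def total_service_cost(infrastructure, operations, software, risk, billed_usage):
    """TSC = (infrastructure + operations + software + risk) - billed usage."""
    return (infrastructure + operations + software + risk) - billed_usage

# Example: a service costing $40k to run that recovers $15k in billed usage.
tsc = total_service_cost(
    infrastructure=20_000,  # allocated share of network, storage, utilities
    operations=12_000,      # help desk, monitoring, system administration
    software=5_000,         # licenses and subscriptions
    risk=3_000,             # estimated exposure to outages and data loss
    billed_usage=15_000,    # charge-backs recovered from service consumers
)
print(tsc)  # 25000
```

A positive TSC is the true net cost of delivering the service; comparing TSC across public, private and hybrid alternatives is what makes the metric useful.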
5 Problems with SaaS Security from CIO.com http://bit.ly/a1cojT
Due diligence processes for Cloud Computing by Information Security Magazine http://bit.ly/9UtSxT
Are you interested in the entrepreneurial opportunity to work for a startup that is making a change in the industry, and that will give you the experience and exposure you need to build your career? If so, then Checklist 2.0 (www.checklist20.com) – Organized Best Practices – is the firm for you. Checklist 2.0 is a web platform that helps clients meet the challenges and opportunities of the global IT marketplace in the areas of audit, compliance, security and benchmarking.
At Checklist 2.0, you will be part of a learning culture, where teamwork and collaboration are encouraged, entrepreneurship is rewarded, and diversity is respected and valued. We offer a flexible career progression model that allows for a variety of challenging opportunities throughout your career. We provide unparalleled coaching, mentoring, and career opportunities; and state of the art technology-driven methodologies to help you provide quality service to our global customer base.
Our practice collaborates with subject matter experts, auditors, assurance professionals, and standards bodies (ISO, ISACA, ISSA, etc.) so that the best practices they have identified are shared with the community for peer review and reference. When we fulfill our role as an organizing and peer-reviewing platform, by standing firm on up-to-date and specific practices, we have a direct impact on how well the world's IT systems function. Join us and we will help you implement a successful career strategy as you explore the many career opportunities in Checklist 2.0 practices.
The Checklist 2.0 services function requires an understanding of an organization’s objectives, risks, risk management priorities, regulatory environment, and the diverse needs of critical stakeholders.
We can assist organizations that require help improving the quality and effectiveness of their internal audit processes in a number of ways. First, we can advise and assist in the development of internal audit and risk management methodologies, including assessing whether the internal audit function is delivering effectively to stakeholders. Second, we can provide internal audit resourcing solutions, including full outsourcing or complementing in-house functions with specialist skills or geographical coverage. Third, we can support internal audit functions with software to enhance and support their work. In addition, we can develop training for internal auditors using our extensive peer-reviewed knowledgebase to create highly-tailored solutions.
* Play a role as an intern in Checklist 2.0's audit plan, best practices, and checklist development practice, assisting with the development, technical editing and publishing of content for different technology topics.
* Responsibilities may include, but are not limited to, the following: assisting with the development of risk assessments and audit plans; assisting in writing, moderating, and editing content from public resources such as ISACA.org, NIST.gov and other leading web sources into Checklist 2.0 format to address IT auditors' needs.
* Passion for IT, security, audit and writing; demonstrated creative thinking and individual initiative.
* Cultivate teamwork dynamics through working as a team member: understand personal and team roles; contribute to a positive working environment by building solid relationships with team members; proactively seek guidance, clarification and feedback.
* Demonstrate flexibility in prioritizing and completing tasks; communicate potential conflicts to a supervisor.
* Interest in all aspects of internal auditing and a desire to pursue a career in IT auditing and Security.
* Ability to demonstrate strong problem solving skills and the ability to prioritize and handle multiple tasks.
* Ability to interact with various levels of client and firm management in both written and verbal form.
* Ability to self-motivate and take responsibility for personal growth and development.
* Flexibility and desire to travel, as client assignments require.
* Pursuing a Bachelor's degree in computer science or engineering, with a passion for writing.
Checklist 2.0 – Organized Best Practices – is a collaborative and customizable web platform for generating up-to-date and peer-reviewed audit plans, audit programs, and best practices in different technology domains. Checklist 2.0 content is contributed to, and organized by, trusted experts and authoritative sources around the world. Checklist 2.0 covers a diverse range of requirements including SOX, HIPAA, PCI-DSS, ISO etc.
Cloud Computing Contract and IT audit – very in depth! – http://bit.ly/9x4ulz
Hybrid Cloud Computing Best Practices & Audit Checklist
Channel Insider Daily
As more businesses look to move IT operations to the cloud, many of them are weighing the benefits of moving just some functions while keeping others in house. The approach provides companies with control over mission critical functions and legacy operations while moving more portable applications to a cloud model.
That’s why many companies are considering building a hybrid cloud computing strategy to meet their needs with a mix of public cloud and private cloud platforms. But mixing public and private cloud platforms and applications has potential pitfalls of its own. Intel (www.intel.com) and Univa UD (www.univaud.com) have devised a list of best practices for developing a hybrid cloud computing architecture.
Here’s a look at what companies should consider:
* Examine Your Goals
The “why” is a very important first step. Examine your motivations for moving to a hybrid cloud computing model. What are you getting out of it? Cost benefits? Compliance needs? Performance improvements?
* Measure Success
Before the implementation begins, set your goals and decide how you are going to measure success. Also ensure that the success can be tested and that service level agreements (SLAs) can be met.
* Evaluate Multiple Providers
Don’t choose the first cloud provider that comes along. A cloud provider should have a comfortable level of maturity in the market and have a detailed product and service roadmap they’re willing to share. For more on due diligence of cloud providers, see the “Is That Cloud Safe? Due Diligence for Cloud Apps” slideshow.
* What’s an SLA?
Service-level agreements (SLAs) can be tricky things, and customers need to understand what they are, what’s detailed in the agreement and what will happen if the agreement is not met. Compare SLAs of different providers to get an idea of what’s important and, more importantly, what will best serve your business’s needs.
* Scenario Planning
To make sure they’re ready for the hybrid cloud from a technical perspective, businesses need to do scenario planning based on their individual needs. Should the environment be static or dynamic? Is bursting important, and do you want to be able to burst on demand?
* Network Design
Switching to a hybrid computing infrastructure means doing a network assessment and (likely) reconfiguring your existing network so it will serve your business’ needs effectively in the cloud world.
* Workload Assessment
With three options (private cloud, public cloud and local) available for running workload applications, each workload has to be assessed and then placed in the most appropriate place. Keep in mind corporate policies and compliance regulations when placing workloads. Security should be a top concern.
* Workload Readiness
Workloads will need to be standardized so that they can be used with the vendors you are working with. Compatibility with your provider’s APIs and making sure there is appropriate load balancing available are also both key to ensuring a smooth transition to hybrid cloud computing.
* Risk Analysis
Although cloud vendors assure customers that security is top of mind, businesses still need to do appropriate risk analysis and then make plans for backups or alternatives. One key thing for customers is ensuring they maintain control of their data, so read the fine print.
Changes in how you want to run your cloud applications should be expected, so ensure that applications are mobile between the public and private clouds, as well as across bare metal and virtual modes. Migrating workloads from internal to external and back (as well as between providers) should be seamless. Avoid getting locked into a single provider.
What’s so good about the cloud if you’re stuck making changes manually? Automating workload placements, security of the network and nodes, and provisioning and management will reduce headaches and make your hybrid cloud infrastructure run much more seamlessly.
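The workload assessment step above boils down to classifying each workload against policy. As a minimal sketch, assuming hypothetical policy rules and workload attributes (the real criteria would come from your own compliance and security requirements), the placement decision might look like:

```python
# Hypothetical workload placement sketch for a hybrid cloud assessment.
# The rules and attributes below are illustrative, not prescriptive.

def place_workload(workload):
    """Return 'local', 'private', or 'public' for a workload dict."""
    # Regulated or highly sensitive data stays out of shared infrastructure;
    # legacy systems that resist virtualization stay on local hardware.
    if workload["regulated"] or workload["sensitivity"] == "high":
        return "local" if workload["legacy"] else "private"
    # Bursty, portable workloads are the natural public-cloud candidates.
    if workload["bursty"] and workload["portable"]:
        return "public"
    return "private"

workloads = [
    {"name": "payroll", "regulated": True, "sensitivity": "high",
     "legacy": True, "bursty": False, "portable": False},
    {"name": "web-portal", "regulated": False, "sensitivity": "low",
     "legacy": False, "bursty": True, "portable": True},
]
for w in workloads:
    print(w["name"], "->", place_workload(w))
# payroll -> local
# web-portal -> public
```

Encoding the placement policy this explicitly also makes it auditable: each rule maps to a corporate policy or compliance regulation that can be reviewed.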
Forrester Research, Inc. Business Cloud Computing Audit Checklist
Simplifying cloud computing security audit procedures
Beth Cohen, Contributor 04.01.2010 (edited for brevity)
By now, everyone has heard that cloud computing is changing the world, and there is no question that it will. However, as with any new technology model or innovation, there are many bumps and detours along the way.
Channel customers have spoken, and far and away the number one reason they consistently hold back on cloud service deployment is their perception that the cloud is insecure.
Ultimately, as a trusted advisor, your goal is to identify and articulate to your customers their cloud security challenges and provide solutions that both save your customers money and build stronger client relations.
The top cloud security challenges for businesses can be categorized into five broad areas:
* Business: A lack of integration between cloud vendors, limited data portability and vendor lock-in (isn’t that what the cloud is supposed to avoid?!) are giving business executives and IT departments heartburn. Deciding what data must stay in-house and what data can migrate to the cloud can be complex and fraught with hidden gotchas.
As a trusted advisor, you can guide your customer through the audit process, pointing out potential problems and solutions. Look for systems that are already touching outside customers and networks — a customer service system or Web portal is a good example of a natural cloud migration.
* Financial: Companies need to determine if it makes more financial sense to purchase cloud services or build customized systems in house. Often companies underestimate the risks and cost of data loss, or the cost of mitigating and preventing the occurrence in the first place. With your knowledge of the real business cost of data loss, you can educate your clients about their level of exposure.
* Legal: Companies need to determine the level of archiving and protection they need to provide for potential legal actions and e-discovery requests. In this day and age, it is not enough to say that the files are no longer accessible; companies can and will be held liable for the data recovery. As a channel partner, you can provide services, like information lifecycle management (ILM) or data privacy audits to ensure your clients are fully protected in the cloud.
* Regulatory: HIPAA, state data protection laws, SOX and a myriad of other regulations affect your clients differently depending on their business and industry sector. Regulations are rapidly catching up with cloud technology, so understanding the often complex and sometimes contradictory regulatory environments is a valuable skill for helping your clients navigate the treacherous waters of using cloud services in a regulated industry. This is particularly true for PCI DSS and banking regulatory compliance.
* Technical: Cloud vendors are not always forthcoming about the details of their services, particularly how customer data is authenticated, secured and protected. The technology behind cloud services is often a mystery to even the most sophisticated customer. As a channel partner who really does know cloud architectures, your guidance is invaluable for clients who need to protect their data no matter where it is located.
Cloud security audit best practices checklist:
* Perform a data flow and privacy assessment: Look at where the client’s data is and how it flows through the organization. Is it vulnerable at any point? Is it all internal, or is some data already out on the cloud?
* Probe customer data for its suitability for the cloud: Rank the data into three pools: belongs on the cloud, does not belong on the cloud, and might belong on the cloud. For example, your corporate financial statements probably do not belong on the cloud, while your customer service systems and archives (as long as they are properly encrypted) do.
* Evaluate the client’s application portfolio: Evaluate the portfolio from both the business and data security perspectives. Which applications are available on the cloud and which ones are likely to be available in the future? Can some of the specialized applications currently in use be migrated to the cloud relatively easily or will they require extensive configuration and modification to business processes?
* Audit the existing IT infrastructure, servers and networks: Look for potential cloud migration opportunities. Help your client understand what systems will benefit from moving to the cloud and which ones will not. Some good targets for cloud migration would be a client’s email system or CRM system. Both of these systems are not only essential, but there are quite a few relatively mature cloud options available to choose from.
* Review cloud vendor contracts: Watch for potential service-level discrepancies and make sure your customer understands the relative responsibilities of each party.
* Help your customer develop a contingency plan: If a cloud vendor relationship does not work out, do not forget to include data extraction and portability as a key design goal to minimize vendor lock-in.
Filesystem block size rarely enters the sparkling dialog at your noontime geekfest, where movie one-liners and song lyrics replace actual conversation, but today is different. The ticking of thumbs halts in mid-text when someone at the table opens up an intellectual volley with, “Have you ever seen the error that there isn’t enough space on the filesystem for the selected operation in Virtual Center?” The puzzled faces stare back as if someone had just announced that iPads are on sale for half price. But before you, or they, have a chance to react to this apparently simple problem of insufficient disk space, consider: the problem isn’t insufficient disk space.
All the years of accepting the default block size when formatting new disks pass before your mind’s eye with curiosity. Don’t decide at this late date that all your efforts were magnetic dust in the wind. All is not lost. Nor are you sentenced to suffer more painful lyric references.
Hitting the Wall
Your SAN Administrator presents you with a fresh 1TB LUN for your VMware environment. You create a new datastore by accepting all the defaults, including the default block size of 1MB. In a few minutes, your 1TB LUN takes on its new role as VMFS-formatted storage for all your space hungry guest systems.
During your first physical to virtual (P2V) migration, you receive a failure notice that looks similar to: “Failed to create virtual disk: There is not enough space on the file system for the selected operation.”
The entire disk capacity of the physical system is 500GB, so how could this happen? You re-examine the physical system’s disk layout and find that, indeed, you have a total capacity of 500GB.
Physical System’s Disk Layout
* C: – 30GB
* E: – 400GB
* F: – 70GB
You attempt the P2V migration a few more times and carefully consider each option as you step through the wizard. However, it fails each time with the same error message.
The solution requires that you take notice of what’s happening when you step through the datastore creation wizard. When you reach the Disk/LUN Formatting step, take pause and examine your choices.
The VMFS-3 Virtual Blockade
* Maximum file size: 256 GB, Block size: 1 MB
* Maximum file size: 512 GB, Block size: 2 MB
* Maximum file size: 1024 GB, Block size: 4 MB
* Maximum file size: 2048 GB, Block size: 8 MB
Since your E: drive is 400GB in size, and you didn’t resize it during the P2V migration, you must select a block size of 2MB or larger. Once you do this, the P2V migration will proceed normally. The block sizes and file sizes are limitations of the VMFS-3 filesystem. Yes, limitations. And while two whole terabytes ought to be a large enough single file for anyone*, databases are well known to recognize no such limits.
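The VMFS-3 limits above lend themselves to a quick pre-migration check: given the largest guest disk, find the smallest block size whose maximum file size can hold it. A minimal sketch, using only the four tiers listed above:

```python
# VMFS-3 maximum file size (GB) per datastore block size (MB),
# as listed in the Disk/LUN Formatting step.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def min_block_size_mb(largest_disk_gb):
    """Smallest VMFS-3 block size that can hold the largest guest disk."""
    for block_mb in sorted(VMFS3_MAX_FILE_GB):
        if largest_disk_gb <= VMFS3_MAX_FILE_GB[block_mb]:
            return block_mb
    raise ValueError("Disk exceeds the VMFS-3 2TB maximum file size")

# The 400GB E: drive from the example needs at least a 2MB block size.
print(min_block_size_mb(400))  # 2
```

Running this check before formatting the datastore avoids discovering the limit only when the P2V migration fails.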
VMFS-4 won’t have this limitation. Its maximum file size should follow the ext4 filesystem standard of 16TB.
There are two glaring observations that I can make here. First, it would be great if developers would write errors that are more explicit and more helpful. Generic errors only tend to frustrate technical people into Googling for assistance, leading to hours or days of wasted troubleshooting time. Second, shouldn’t VMware move on to VMFS-4 (ext4) or raise the default block size dynamically to the maximum possible for a disk? Feel free to return to your system administrator lunchtime mayhem, chicken strips, fries and Diet Dr Pepper.
* Sounds a bit like that unattributable quotation, “640K RAM ought to be enough for anyone.” And, like the person who spoke that one into existence, I’ll deny that I ever said it.
Kenneth Hess is a Linux evangelist and freelance technical writer on a variety of open source topics including Linux, SQL, databases, and web services. Ken can be reached via his website at http://www.kenhess.com. Practical Virtualization Solutions by Kenneth Hess and Amy Newman is available now.