Wednesday, May 26, 2010

CloudFucius Listens: F5’s Cloud Computing Solutions

About a month ago, CloudFucius Hollered about F5’s On-Demand IT solutions, which offer a holistic approach to enable a common cloud architectural model—regardless of where IT resources actually reside. That post contained links to the various whitepapers specifically covering F5’s Cloud Computing Solutions, which were designed to unleash the true potential of virtualization and cloud computing. It’s a comprehensive approach that integrates disparate technologies for application delivery, security, optimization, data management, and infrastructure control—all critical technologies needed to realize the true flexibility and agility of virtualization and cloud computing solutions. CloudFucius is now excited to announce the availability of those whitepapers in audio format.

For your listening pleasure:

Audio White Paper - The F5 Powered Cloud - How F5 solutions power a cloud computing architecture capable of delivering highly available, secure, and optimized on-demand application services. Read the full PDF of the whitepaper here.

Audio White Paper - The Optimized and Accelerated Cloud - As more organizations begin moving applications into the cloud, congestion will become an increasingly critical issue. F5 offers solutions for optimizing and accelerating applications in the cloud, making them fast and available wherever they reside. Read the full PDF of the whitepaper here.

Audio White Paper - Securing the Cloud - Cloud computing has become another key resource for IT deployments, but there is still fear about securing applications and data in the cloud. With F5 devices, you can keep your most precious assets safe, no matter where they live. Read the full PDF of the whitepaper here.

Audio White Paper - Cloud Balancing: The Evolution of Global Server Load Balancing - Cloud balancing evolves global server load balancing from traditional routing options based on static data to context-aware distribution across cloud-based services. Read the full PDF of the whitepaper here.

Audio White Paper - Availability and the Cloud - Cloud computing offers IT another tool to deliver applications. While enticing, challenges still exist in making sure the application is always available. F5’s flexible, unified solutions ensure high availability for cloud deployments. Read the full PDF of the whitepaper here.

And one from Confucius: Learning without thought is labor lost; thought without learning is perilous.

ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5, 6, 7

Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, intercloud, cloud, context-aware, infrastructure 2.0, automation, web, internet, blog

twitter: @psilvas


Tuesday, May 25, 2010

CloudFucius Combines: Security and Acceleration

CloudFucius has explored Cloud Security with AAA Important to the Cloud and Hosts in the Cloud, along with wanting An Optimized Cloud. Now he desires the sweet spot of Cloud Application Delivery: combining Security and Acceleration. Few vendors want to admit that adding a web application security solution can also add latency, which can be kryptonite for websites. No website, cloud or otherwise, wants to add any delay to users’ interaction. Web application security that also delivers blazing fast websites might sound like an oxymoron, but not to CloudFucius. And in light of Lori MacVittie’s Get your SaaS off my cloud and the accompanying dramatic reading of it, I’m speaking of IaaS and PaaS cloud deployments, where the customer has some control over the applications, software, and systems deployed.

It’s like the old Reese’s peanut butter cups commercial: “You’ve stuck your security in our acceleration.” “Yeah, well, your acceleration has broken our security.” Securing applications and preventing attacks while simultaneously ensuring consistent, rapid user response is a basic web application requirement. Yet web application security traditionally comes at the expense of speed. This is an especially important issue for online retailers, where slow performance can mean millions of dollars in lost revenue and a security breach can be just as devastating: more than 70 percent of consumers say they would no longer do business with a company that exposed their sensitive information.

Web application performance in the cloud is also critical for corporate operations, particularly for remote workers, where slow access to enterprise applications can destroy productivity. As more applications are delivered through a standard browser from the cloud, the challenge of accelerating web applications without compromising security grows. This has usually required multiple dedicated devices from either the customer or the provider, along with staff to properly configure and manage them. Because each of these “extra” devices has its own way of proxying transactions, packets can slow to a crawl due to the added overhead of TCP and application processing. Fast and secure in a single, individually wrapped unit can seem like two contrary goals.

The Security Half
As the cloud has evolved, so have security issues. And as more companies become comfortable deploying critical systems in the cloud, solutions like web application firewalls are a requirement, particularly for regulatory compliance situations. Plus, as the workforce becomes more mobile, applications need to be available in more places and on more devices, adding to the complexity of enforcing security without impacting productivity. Consider that a few years back, the browser’s main purpose was to surf the net. Today, the browser is a daily tool for both personal and professional needs. In addition to the usual web application activities like ordering supplies, checking traffic, and booking travel, we also submit more private data like health details and payroll information. The browser acts as a secret confidant in many areas of our lives since it transmits highly sensitive data in both our work and social spheres. And it goes both ways; while other people, providers, sites, and systems have our sensitive data, we may also be carrying someone else’s sensitive data on our own machines. Today, the Cloud, and really the Internet at large, is more than a means of paying bills or getting our jobs done—it holds our digital identity for both work and play. And once a digital identity is out there, there’s no retracting it. We just hope there are proper controls in place to keep it secret and safe.

The Acceleration Half
For retail web applications and search engines, downtime or poor performance can mean lost revenue along with significant, tangible costs. A couple of years ago, the Warwick Business School published research showing that an unplanned outage lasting just an hour can cost more than $500,000 in lost revenue. For financial institutions, the loss can run into the millions of dollars. And downtime costs more than just lost revenue. Not adhering to a service level agreement can incur remediation costs or penalties, and non-compliance with certain regulatory laws can result in fines. Additionally, the damage to a company’s brand reputation—whether it’s from an outage, poor performance, or a breach—can have long-lasting, detrimental effects on the company.

These days, many people have high-speed connections at home and access applications in the cloud. But applications have matured and now offer users pipe-clogging rich data like video and other multimedia. If a website is slow, users will probably go somewhere else. It happens all the time. You type in a URL only to watch the browser icon spin and spin. You might try to reload or retype, but more often, you simply type a different URL for a similar site. With an e-commerce site, poor performance usually means a lost sale because you probably won’t wait around if your cart doesn’t load quickly or stalls during the secure check-out process. If it’s a business application and you’re stuck with a sluggish site, that means lost productivity, a frustrated user, and, often, a time-consuming trouble ticket for IT. When application performance suffers, the business suffers.

What’s the big deal?
Typically, securing an application can come at the cost of end-user productivity because of deployment complexity. Implementing website security—like a web application firewall—adds yet another mediation point where the traffic between the client and the application is examined and processed. This naturally increases the latency of the application, especially in the cloud, since the traffic might have to make multiple trips. The penalty can become painfully apparent with globally dispersed users or metered bandwidth agreements, but the solution is not always simple. Web application performance and security administration can cross organizational structures within companies, making ownership splintered and ambiguous. Add a cloud provider to the mix and the finger pointing can look like Harry Nilsson's The Point! (Oh how I love pulling out obscure childhood references in my blogs!!)
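To put rough numbers on the mediation-point cost, here is a minimal back-of-the-envelope sketch in Python. The round-trip time, per-device processing delay, and trip counts are hypothetical illustrations, not measurements of any particular product or provider.

```python
# Back-of-the-envelope latency budget for a chained-proxy deployment.
# All numbers are hypothetical illustrations, not measurements.

RTT_MS = 80            # client <-> cloud round-trip time
PROXY_PROCESS_MS = 15  # per-device TCP/application processing overhead

def page_latency(round_trips, mediation_points):
    """Rough latency for one page load: network trips plus per-proxy work."""
    return round_trips * RTT_MS + mediation_points * PROXY_PROCESS_MS

# Direct to the app vs. separate WAF and accelerator appliances in the path.
print(page_latency(round_trips=4, mediation_points=0))  # 320 ms
print(page_latency(round_trips=4, mediation_points=2))  # 350 ms
# Extra trips forced by the added hops hurt far more than the per-hop work:
print(page_latency(round_trips=6, mediation_points=2))  # 510 ms
```

The per-hop processing is usually the smaller problem; the extra round trips each additional proxy can force are what users actually feel.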

The Sweet Spot
Fortunately, you can integrate security and acceleration into a single device with BIG-IP Local Traffic Manager (LTM) and the BIG-IP LTM Virtual Edition (VE). By adding the BIG-IP Application Security Manager (ASM) module and the BIG-IP WebAccelerator module to BIG-IP LTM, not only are you able to deliver web application security and acceleration, but the combination provides faster cloud deployment and simplifies the process of managing and deploying web applications in the cloud. This is a true, internal system integration and not just co-deployment of multiple proxies on the same device. These integrated components provide the means to both secure and accelerate your web applications with ease. The unified security and web application acceleration solution takes a single-platform approach that receives, examines, and acts upon application traffic as a single operation, in the shortest possible time and with the least complexity. The management GUI allows varying levels of access to system administrators according to their roles. This ensures that administrators have appropriate management access without granting them access to restricted, role-specific management functions. Cloud providers can segment customers; customers can segment departments.

The single-platform integration of these functions means that BIG-IP can share context between security and acceleration, something you don’t get with multiple units, and it enables both the security side and the acceleration side to make intelligent, real-time decisions for delivering applications from your cloud infrastructure. You can deploy and manage a highly available, very secure, and incredibly fast cloud infrastructure all from the same unified platform, one that minimizes WAN bandwidth utilization, safeguards web applications, and prevents data leakage, all while directing traffic to the application server best able to service a request. Using the unified web application security and acceleration solution, a single proxy secures, accelerates, optimizes, and ensures application availability for all your cloud applications.
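To make the shared-context idea concrete, here is a minimal conceptual sketch in Python. It is not how BIG-IP modules are configured or programmed; the request fields, security check, and caching rule are hypothetical. The point is only that one pass over one shared context avoids re-proxying traffic between separate security and acceleration devices.

```python
# Conceptual sketch of a unified proxy: one pass that both inspects and
# accelerates, sharing request context between the two steps. Hypothetical
# types and checks -- not an F5 API or configuration.

from dataclasses import dataclass, field

@dataclass
class RequestContext:
    path: str
    client_ip: str
    violations: list = field(default_factory=list)
    cacheable: bool = False

def security_inspect(ctx: RequestContext) -> RequestContext:
    # Example policy check (illustrative only).
    if "../" in ctx.path:
        ctx.violations.append("path traversal")
    return ctx

def accelerate(ctx: RequestContext) -> RequestContext:
    # The acceleration step reuses what security already learned:
    # only clean, static-looking requests are marked cacheable.
    ctx.cacheable = not ctx.violations and ctx.path.endswith((".js", ".css", ".png"))
    return ctx

def handle(path: str, client_ip: str) -> RequestContext:
    # Single operation over one shared context -- no second TCP termination
    # or re-proxying between the security and acceleration functions.
    return accelerate(security_inspect(RequestContext(path, client_ip)))

print(handle("/static/app.js", "203.0.113.7"))
print(handle("/../etc/passwd", "203.0.113.7"))
```

In the chained-appliance model, the accelerator would have to re-derive, or would simply never see, what the security layer already learned about each request.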

And one from Confucius: He who will not economize will have to agonize.

ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5, 6

Related:
Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, intercloud, cloud, context-aware, infrastructure 2.0, automation, web, internet, blog, law
twitter: @psilvas

Tuesday, May 18, 2010

CloudFucius Inspects: Hosts in the Cloud

So much has been written about all the systems, infrastructure, applications, content, and everything else IT related that’s making its way to the cloud, yet I haven’t seen much discussion (or maybe I just missed it) about all the clients connecting to the cloud to access those systems. Securing those systems has made some organizations hesitate to deploy IT resources in the cloud, whether due to compliance, the sensitivity of the data, the shared infrastructure, or simply survey results. Once a system is ‘relatively’ secure, how do you keep it that way when the slew of potentially dangerous, infected clients connect? With so many different types of users connecting from various devices, and with a need to access vastly different cloud resources, it’s important to inspect every requesting host to ensure both the user and the device can be trusted. Companies have done this for years with remote/SSL VPN users who request access to internal systems – is antivirus installed and up to date, is a firewall enabled, is the device free of malware, and so forth. Ultimately, the hosts are connecting to servers housed in some data center, and all the same precautions you take with your own space should be enforced in the cloud.


Since cloud computing has opened application deployment to the masses, and all that’s required for access is *potentially* just a browser, you must be able to detect not only the type of computer (laptop, mobile device, kiosk, etc.) but also its security posture. IDC predicts that ‘The world's mobile worker population will pass the one billion mark this year and grow to nearly 1.2 billion people – more than a third of the world's workforce – by 2013.’ With so many Internet-enabled devices available – a Windows computer, a Linux box, an Apple iteration, a mobile device, and anything else with an IP address – any of them could be trying to gain access to your cloud environment at any given moment. It might be necessary to inspect each of these before granting users access in order to make sure it’s something you want to allow. If the inspection fails, how should you fix the problem so that the user can have some level of access? If the requesting host is admissible, how do you determine what they are authorized to access? And, if you allow a user and their device, what is the guarantee that nothing proprietary either gets taken or left behind? The key is to make sure that only “safe” systems are allowed to access your cloud infrastructure, especially if it contains highly sensitive information, and context helps with that.

One of the first steps to accomplishing this is to chart usage scenarios. Working in conjunction with the security policy, it is essential to uncover the usage scenarios and access modes for the various types of users and the many devices they might be using. The chart will probably vary based on your company’s and/or website’s Acceptable Use Policy, but this exercise gets administrators started in determining the endpoint plan. Sounds a lot like a remote access policy, huh – with one exception. Usually there is a notion of ‘trusted’ and ‘un-trusted’ with remote access. If a user requests access from a corporate-issued laptop, often that’s considered a trusted device since there is something identifiable to classify it as an IT asset. These days, with so many personal devices entering the cloud, all hosts should be considered un-trusted until they prove otherwise. And as inter-clouds become reality, you’ll need to make sure that a client coming from someone else’s infrastructure abides by your requirements. Allowing an infected device access to your cloud infrastructure can be just as bad as allowing an invalid user access to proprietary internal information. This is where endpoint security checks take over. Endpoint security prevents infected PCs, hosts, or users from connecting to your cloud environment. Automatic re-routing for infected PCs reduces Help Desk calls and prevents sensitive data from being snooped by keystroke loggers and malicious programs.

Simply validating a user is no longer the starting point for determining access to cloud systems; the requesting device should get the first review. Pre-access checks can run prior to the actual logon page (if there is one) appearing, so if the client is not in compliance, they won’t even get the chance to enter credentials. These checks can determine whether antivirus or a firewall is running, whether it is up to date, and more. Systems can direct the user to a remediation page for further instructions to gain access. It’s easy to educate the user as to why the failure occurred and relay the possible steps to resolve the problem. For example: “We noticed you have antivirus installed but not running. Please enable your antivirus software for access.” Or, rather than deny logon and communicate a detailed remedy, you could automatically send users to a remediation website designed to correct or update the client’s software environment, assuring the policies required for access are satisfied without any user interaction. Inspectors can look for certain registry keys or files that are part of your corporate computer build/image to determine whether this is a corporate asset and thus which system resources are allowed. Pre-access checks can retrieve extended Windows and Internet Explorer information to ensure certain patches are in place. If, based on those checks, the system finds a non-compliant client but an authorized user, you might be able to initiate a secure, protected, virtual workspace for that session.
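As a rough illustration of that decision flow, here is a small Python sketch of a pre-access check. The posture attributes, decision names, and messages are hypothetical; a real deployment collects this data with an inspection agent and expresses the logic as an access policy on the controller, not as application code.

```python
# Sketch of a pre-access endpoint inspection, run before any logon page is
# shown. Attribute names, decisions, and messages are hypothetical.

def pre_access_check(posture):
    """Return a (decision, message) pair for a requesting host."""
    if not posture.get("antivirus_installed"):
        return "remediate", "Please install antivirus software to gain access."
    if not posture.get("antivirus_running"):
        return "remediate", ("We noticed you have antivirus installed but not "
                             "running. Please enable your antivirus software for access.")
    if not posture.get("firewall_enabled"):
        return "remediate", "Please enable your firewall to gain access."
    if posture.get("corporate_image_marker"):
        # Registry key / file from the corporate build: treat as an IT asset.
        return "allow_full", "Recognized corporate asset: full resource set."
    # Clean but unrecognized device: authorized user, limited environment.
    return "allow_protected", "Unrecognized device: protected virtual workspace only."

print(pre_access_check({"antivirus_installed": True, "antivirus_running": False}))
print(pre_access_check({"antivirus_installed": True, "antivirus_running": True,
                        "firewall_enabled": True, "corporate_image_marker": True}))
```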
As the ever-expanding cloud network grows, internal corporate resources still require the most protection, as they always have. Most organizations don’t necessarily want all users’ devices to have access to all resources all the time. Working in conjunction with the pre-access sequence, controllers can gather device information (like IP address or time of day) and determine whether a resource should be offered. A protected configuration measures risk factors using information collected by the pre-access check, so the two work in conjunction. For example, Fake Company, Inc. (FCI) has some contractors who need access to Fake Company’s corporate cloud. While this is not an issue during work hours, FCI does not want them accessing the system after business hours. If a contractor tries to log on at 2 AM, the controller can check the time, see that the contractor’s access is only available during FCI’s regular business hours, and deny access.
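The FCI scenario boils down to a time-of-day rule layered on top of the posture check. A minimal sketch, assuming hypothetical business hours of 8 AM to 6 PM on weekdays:

```python
# Toy version of the FCI contractor rule: admit contractors only during
# business hours. The hours and the weekday-only assumption are hypothetical.

from datetime import datetime, time

BUSINESS_START, BUSINESS_END = time(8, 0), time(18, 0)

def contractor_allowed(now):
    return now.weekday() < 5 and BUSINESS_START <= now.time() <= BUSINESS_END

print(contractor_allowed(datetime(2010, 5, 18, 10, 30)))  # True: mid-morning on a Tuesday
print(contractor_allowed(datetime(2010, 5, 18, 2, 0)))    # False: the 2 AM logon is denied
```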

Post-access actions can protect against sensitive information being “left” on the client.  The controller can impose a cache-cleaner to eliminate any user residue such as browser history, forms, cookies, auto-complete information, and more.  For systems unable to install a cleanup control, you can block all file downloads to avoid the possibility of the inadvertent left-behind temporary file—yet still allow access to needed cloud applications.  These actions are especially important when allowing non-recognized machines access without wanting them to take any data with them after the session.

In summary: first, inspect the requesting device; second, protect resources based on the data gathered during the check; third, make sure no session residue is left behind. Security is typically a question of trust. Is there sufficient trust to allow a particular user and a particular device full access to enterprise cloud resources? Endpoint security gives the enterprise the ability to verify how much trust exists and to determine whether the client gets all the cloud resources, some of the cloud resources, or is just left out in the rain.
And one from Confucius: When you know a thing, to hold that you know it; and when you do not know a thing, to allow that you do not know it - this is knowledge.

ps

The CloudFucius Series: Intro, 1, 2, 3, 4, 5

Related:
Technorati Tags: F5, infrastructure 2.0, integration, cloud connect, Pete Silva, security, business, education, technology, application delivery, intercloud, cloud, context-aware, infrastructure 2.0, automation, web, internet, blog, law

twitter: @psilvas

Tuesday, May 11, 2010

CloudFucius Wants: An Optimized Cloud

Although networks have continued to improve over time, application traffic has increased at a rapid rate in recent years. Bandwidth-efficient client-server applications have been replaced with bandwidth-demanding web applications. Where previous-generation client-server transactions involved tens of kilobytes of data, rich web-based portal applications can transfer hundreds of kilobytes per transaction, and with the explosion of social media and video, megabytes per transaction is not uncommon. Files attached to email and accessed across remote file shares have also increased in size. Even data replication environments with dedicated high-speed links have encountered bandwidth challenges due to increases in the amount of data requiring replication. Our bandwidth-hungry society, with people now watching videos right from their mobile devices, has both a financial and a technical impact on the cloud infrastructures needed to deliver that content.

Attempts to apply compression at the network level have been relatively bland. Routers have touted compression capabilities for years, yet very few organizations enable this capability since it usually entails an all-on or all-off mode and can add overhead, both in terms of additional load placed on the routers and additional latency from the time it takes a router to compress each packet. A key factor in compressing traffic is how the data is presented to the compression routine. All compression routines achieve greater levels of compression when dealing with homogeneous data. When presented with heterogeneous data, such as a collection of packets from multiple different protocols, compression ratios fall dramatically.
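You can see that effect with a few lines of Python and zlib. The payloads below are synthetic stand-ins (repetitive markup versus random bytes), not captured traffic, but the gap in ratios is the point:

```python
# Why homogeneous data compresses better than a mixed stream: compare zlib
# ratios for repetitive text alone versus the same text interleaved with an
# incompressible (random) payload. Payloads are synthetic illustrations.

import os
import zlib

html = b"<div class='row'><span>item</span></div>" * 200   # one repetitive "protocol"
binary = os.urandom(len(html))                              # stand-in for unrelated traffic

samples = {
    "homogeneous (text + text)": html + html,
    "heterogeneous (text + random)": html + binary,
}

for name, data in samples.items():
    ratio = len(data) / len(zlib.compress(data))
    print(f"{name}: {ratio:.1f}x")
# Typical result: the homogeneous stream compresses several times better.
```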

The primary problem with packet-based compression is that it mixes multiple data types together when compressing. These systems usually buffer packets destined for a remote network, compress them either one at a time or as a group, and then send them. The process is reversed on the other end. Packet-based compression systems can have other problems as well. When compressing packets, these systems must choose between writing small packets to the network or performing additional work to aggregate and encapsulate multiple packets. Neither option produces optimal results: writing small packets to the network increases TCP/IP header overhead, while aggregating and encapsulating packets adds encapsulation headers to the stream.

[Figure: Packet Compressor]

Instead, you might want to investigate a WAN optimization solution that operates at the session layer. This allows it to apply compression across a completely homogeneous data set while addressing all application types, which results in higher compression ratios than comparable packet-based systems.

[Figure: Session Compressor]

Operating at the session layer eliminates packet boundary and re-packetization problems. You can easily find matches in data streams that at layer 3 may be many bytes apart but at layer 5 are contiguous. Performing compression at the session layer also increases system throughput by eliminating the encapsulation stage.
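A small Python sketch of the difference, using zlib on a synthetic request stream; the MTU size and payload are assumptions for illustration, but they show why compressing one contiguous session stream beats compressing each packet in isolation:

```python
# Per-packet vs. session-layer compression on the same synthetic stream.
# The 1400-byte "MTU" and the repeated request payload are illustrative.

import zlib

stream = b"GET /catalog/item HTTP/1.1\r\nHost: shop.example\r\n\r\n" * 500

# Packet-based: split into MTU-sized chunks and compress each one alone,
# so matches that straddle packet boundaries are never found.
packets = [stream[i:i + 1400] for i in range(0, len(stream), 1400)]
per_packet_bytes = sum(len(zlib.compress(p)) for p in packets)

# Session layer: the compressor sees the whole contiguous stream at once.
session_bytes = len(zlib.compress(stream))

print(f"per-packet ratio: {len(stream) / per_packet_bytes:.1f}x")
print(f"session ratio:    {len(stream) / session_bytes:.1f}x")
```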

Achieving a high compression ratio is only part of the performance puzzle. In order to improve performance, the compressor must actually increase network throughput. This requires that the compressor operate at greater than line speed; as network speeds increase, a compressor that cannot keep up will fail to fully utilize the available bandwidth. For optimal performance you want to apply the best compression ratio for the bandwidth available. You are paying for the bandwidth, and a half-empty pipe can be costly.
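Put another way, effective application throughput is capped both by the link (line rate times ratio) and by how fast the compressor itself can chew through data. A tiny sketch with made-up numbers:

```python
# Effective throughput is the lesser of what the compressor can process and
# what the link can carry after compression. All figures are hypothetical.

def effective_throughput_mbps(line_rate_mbps, compression_ratio, compressor_mbps):
    return min(compressor_mbps, line_rate_mbps * compression_ratio)

# 100 Mbps link, 3x ratio, compressor good for 1 Gbps of input: link-bound at 300 Mbps.
print(effective_throughput_mbps(100, 3.0, 1000))
# 1 Gbps link, same 3x ratio, same compressor: now compressor-bound at 1000 Mbps,
# so the pipe never sees the full 3x benefit.
print(effective_throughput_mbps(1000, 3.0, 1000))
```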

In cloud deployments, Selective Data Deduplication (SDD) can have a significant impact. SDD is designed to identify and remove repetitive data patterns on the WAN. As data flows through the WAN optimization appliances, they record the byte patterns and build synchronized dictionaries. Should an identical pattern of bytes traverse the WAN a second time, the WANOp device near the sender replaces the byte pattern with a reference to its copy in the dictionary. This can have huge benefits, particularly when deploying virtual machine images or moving applications from the local data center to cloud peering points. Even though virtual machine images can be quite large (tens of gigabytes), they often contain a significant amount of redundant data, like the underlying OS, and are optimal candidates for SDD processing.
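Here is a toy dictionary-based deduplication sketch in Python. The fixed-size chunking, SHA-256 fingerprints, and fake VM image are assumptions for illustration; real WANOp appliances use their own pattern matching and synchronized stores, but the repeat transfer shrinking to references is the idea:

```python
# Toy selective deduplication: both ends share a dictionary of chunks they
# have already seen, so repeated byte patterns cross the WAN only as short
# references. Chunk size, hashing, and the sample "image" are illustrative.

import hashlib

CHUNK = 4096

def dedupe(data, dictionary):
    """Encode data as ('raw', bytes) for new chunks or ('ref', digest) for repeats."""
    encoded = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        key = hashlib.sha256(chunk).digest()
        if key in dictionary:
            encoded.append(("ref", key))       # already in the shared dictionary
        else:
            dictionary[key] = chunk
            encoded.append(("raw", chunk))     # first sighting: send the bytes
    return encoded

shared = {}
vm_image = b"\x00" * 20000 + b"base OS blocks " * 1000   # fake, highly redundant image
first = dedupe(vm_image, shared)
repeat = dedupe(vm_image, shared)                        # re-deploying the same image
print(sum(k == "raw" for k, _ in first), "raw chunks on the first transfer")
print(sum(k == "raw" for k, _ in repeat), "raw chunks on the repeat transfer")
```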

After SDD has removed all previously transferred byte patterns, you can apply a second class of data reduction routines called Symmetric Adaptive Compression (SAC).  While SDD is optimized to enhance repeat transfer performance, SAC is designed to improve first transfer performance through the use of advanced encoding techniques and dictionaries optimized for very small repetitive patterns.  SAC constantly adapts to changing network conditions and application requirements mid-stream.  During periods of high congestion, SAC increases compression levels to reduce congestion and network queuing delay.  During periods of low congestion, SAC reduces compression levels to minimize compression induced latency.  By examining every packet and adjusting the codec based on the flow, the adaptive nature of SAC ensures that the optimal compression strategy is applied and enables network administrators to deploy compression without fear of degrading application performance.
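SAC itself is an appliance feature, but the adapt-to-congestion idea can be sketched with plain zlib: push the compression level up when the outbound queue is deep and bandwidth is the bottleneck, and back it off when the link is idle so the codec doesn’t add latency. The queue-depth thresholds below are hypothetical:

```python
# Adaptive compression sketch: choose the zlib level from a congestion signal.
# Thresholds, the queue-depth signal, and the sample payload are hypothetical.

import zlib

def pick_level(queue_depth_pkts):
    if queue_depth_pkts > 500:   # heavy congestion: spend CPU to save bytes
        return 9
    if queue_depth_pkts > 100:   # moderate congestion
        return 6
    return 1                     # idle link: fastest setting, least added latency

def send(payload, queue_depth_pkts):
    return zlib.compress(payload, pick_level(queue_depth_pkts))

payload = b"status=OK;latency=12ms;region=us-west;" * 400
for depth in (10, 250, 900):
    print(f"queue depth {depth}: {len(send(payload, depth))} bytes on the wire")
```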

Like SDD, SAC can benefit cloud deployments. The elasticity of the cloud requires that the infrastructure be just as dynamic. By responding to the ever-changing network conditions of both the cloud and the end user, SAC can make certain that you are using your bandwidth efficiently and quickly delivering the needed content to any user around the globe.

And one from Confucius: Life is really simple, but we insist on making it complicated.

ps

The CloudFucius Series: Intro, 1, 2, 3, 4

Technorati Tags: F5, Replication, Web Optimization, Data Deduplication, Pete Silva, technology, application delivery, intercloud, cloud, infrastructure 2.0

twitter: @psilvas


Thursday, May 6, 2010

Fast Application Access with Jonathan George - F5 at Interop

F5's Jonathan George presents Fast Application Access at Interop 2010. Web access, global delivery, cloud deployments, and remote users are just some of the areas covered in this short presentation.

ps

Technorati Tags: F5, infrastructure 2.0, integration, collaboration, standards, cloud connect, Pete Silva, F5, security, business, education, technology, application delivery, intercloud, cloud, context-aware, infrastructure 2.0, automation, web, internet, blog

twitter: @psilvas


Wednesday, May 5, 2010

CloudFucius Ponders: High-Availability in the Cloud

According to Gartner, “By 2012, 20 percent of businesses will own no IT assets.” While the need for hardware will not disappear completely, hardware ownership is going through a transition: virtualization, total cost of ownership (TCO) benefits, an openness to letting users run their personal machines on corporate networks, and the advent of cloud computing are all driving the movement to reduce hardware assets. Cloud computing offers the ability to deliver critical business applications, systems, and services around the world with a high degree of availability, which enables a more productive workforce. No matter which cloud service — IaaS, PaaS, or SaaS (or a combination thereof) — a customer or service provider chooses, the availability of that service to users is paramount, especially if service level agreements (SLAs) are part of the contract. Even with a huge cost savings, there is no benefit to either the user or the business if an application or infrastructure component is unavailable or slow.

As hype about the cloud has turned into the opportunity for cost savings, operational efficiency, and IT agility, organizations are discussing, testing, and deploying some form of cloud computing. Many IT departments initially moved to the cloud with non-critical applications and, after experiencing positive results and watching cloud computing quickly mature, are starting to move their business-critical applications, enabling business units and IT departments to focus on the services and workflows that best serve the business. Since the driver for any cloud deployment, regardless of model or location, is to deliver applications in the most efficient, agile, and secure way possible, the dynamic control plane of a cloud architecture must be able to intercept, interpret, and instruct where the data must go. It also needs the necessary infrastructure, at strategic points of control, to enable quick, intelligent decisions and ensure consistent availability.

The on-demand, elastic, scalable, and customizable nature of the cloud must be considered when deploying cloud architectures.  Many different customers might be accessing the same back-end applications, but each customer has the expectation that only their application will be properly delivered to users.  Making sure that multiple instances of the same application are delivered in a scalable manner requires both load balancing and some form of server virtualization. An Application Delivery Controller (ADC) can virtualize back-end systems and can integrate deeply with the network and application servers to ensure the highest availability of a requested resource.  Each request is inspected using any number of metrics and then routed to the best available server.  Knowing how an ADC can enhance your application delivery architecture is essential prior to deployment. Many applications have stellar performance during the testing phase, only to fall apart when they are live. By adding a Virtual ADC to your development infrastructure, you can build, test and deploy your code with ADC enhancements from the start.
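As a rough picture of the “best available server” decision an ADC makes per request, here is a minimal Python sketch. The pool data, health flags, and least-connections-with-response-time tie-breaker are illustrative assumptions; on a real controller this is expressed through configured load-balancing methods and health monitors rather than code like this.

```python
# Conceptual sketch of per-request server selection from health and load
# metrics. Pool contents and the selection rule are illustrative only.

def best_server(pool):
    healthy = [s for s in pool if s["healthy"]]
    if not healthy:
        raise RuntimeError("no available members in the pool")
    # Least connections, with measured response time as the tie-breaker.
    return min(healthy, key=lambda s: (s["connections"], s["response_ms"]))

pool = [
    {"name": "app-01", "healthy": True,  "connections": 230, "response_ms": 41},
    {"name": "app-02", "healthy": True,  "connections": 180, "response_ms": 55},
    {"name": "app-03", "healthy": False, "connections": 0,   "response_ms": 0},
]
print(best_server(pool)["name"])   # app-02: healthy and least loaded
```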

With an ADC, load balancing is just the foundation of what can be accomplished. In application delivery architectures, additional elements such as caching, compression, rate shaping, authentication, and other customizable functionality can be combined to provide a rich, agile, secure, and highly available cloud infrastructure. Scalability is also important in the cloud, and being able to bring up or take down application instances seamlessly — as needed and without IT intervention — helps to prevent unnecessary costs if you’ve contracted a “pay as you go” cloud model. An ADC can also isolate management and configuration functions to control cloud infrastructure access and keep network traffic separate to ensure segregation of customer environments and the security of the information. The ability of an ADC to recognize network and application conditions contextually in real time, as well as its ability to determine the best resource to deliver the request, ensures the availability of applications delivered from the cloud.

Availability is crucial; however, unless applications in the cloud are delivered without delay, especially when traveling over latency-sensitive connections, users will be frustrated waiting for “available” resources. Additional cloud deployment scenarios, like disaster recovery or seasonal web traffic surges, might require a global server load balancer added to the architecture. A Global ADC uses application awareness, geolocation, and network condition information to route requests to the cloud infrastructure that will respond best. Using the geolocation of users based on IP address, you can route each user to the closest cloud or data center. In extreme situations, such as a data center outage, a Global ADC will already know that a user’s primary location is unavailable and will automatically route the user to a responding location.
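A toy sketch of that global decision, with made-up sites, distances, and health flags; the real decision also weighs application awareness and live network conditions, which this deliberately leaves out:

```python
# Toy global server load balancing: send each user region to the closest
# site that is up, and fail over automatically when a site is down.
# Sites, distances, and regions are fabricated for illustration.

SITES = {
    "us-west":  {"up": True,  "km": {"NA": 800,   "EU": 8500, "APAC": 9000}},
    "eu-west":  {"up": True,  "km": {"NA": 6000,  "EU": 300,  "APAC": 9700}},
    "ap-south": {"up": False, "km": {"NA": 12000, "EU": 7000, "APAC": 1200}},
}

def resolve(user_region):
    live = {name: site for name, site in SITES.items() if site["up"]}
    return min(live, key=lambda name: live[name]["km"][user_region])

print(resolve("EU"))    # eu-west
print(resolve("APAC"))  # ap-south is down, so the next-closest live site answers
```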

Cloud computing, while still evolving in all its iterations, can offer IT a powerful alternative for efficient application, infrastructure, and platform delivery.  As businesses continue to embrace the cloud as an advantageous application delivery option, the basics are still the same: scalability, flexibility, and availability to enable a more agile infrastructure, faster time-to-market, a more productive workforce, and a lower TCO along with happier users.

And one from Confucius: The man of virtue makes the difficulty to be overcome his first business, and success only a subsequent consideration.

ps

The CloudFucius Series: Intro, 1, 2, 3

Technorati Tags: F5, infrastructure 2.0, integration, collaboration, standards, cloud connect, Pete Silva, F5, security, business, education, technology, application delivery, intercloud, cloud, context-aware, infrastructure 2.0, automation, web, internet, blog

twitter: @psilvas
