Cloud Computing: Implementation, Management, and Security, by John W. Rittinghouse and James F. Ransome, provides an understanding of what cloud computing really means, explores how disruptive it may become in the future, and examines its advantages and disadvantages.
Large-scale integration of circuits led to the development of very small processing units, the next step along the evolutionary trail of computing.
The Intel 4004 was the first complete CPU on one chip and became the first commercially available microprocessor. It was possible because of the development of new silicon gate technology that enabled engineers to integrate a much greater number of transistors on a chip that would perform at a much faster speed.
This development enabled the rise of the fourth-generation computer platforms. By incorporating random access memory (RAM), developed by Intel, fourth-generation computers were faster than ever before and had much smaller footprints.
As technology progressed, however, new processors brought even more speed and computing capability to users. The microprocessors that evolved from the 4004 allowed manufacturers to begin developing personal computers small enough and cheap enough to be purchased by the general public. The first commercially available personal computer was the MITS Altair 8800, released at the end of 1974. The PC era had begun in earnest by the mid-1980s.
Even though microprocessing power, memory, and data storage capacities have increased by many orders of magnitude since the invention of the microprocessor, the technology for large-scale integration (LSI) or very-large-scale integration (VLSI) microchips has not changed all that much. The conceptual foundation for creation of the Internet was significantly developed by three individuals.
The first, Vannevar Bush, wrote a visionary description of the potential uses for information technology with his description of an automated library system named MEMEX.
It was finally published in July 1945 in the Atlantic Monthly. The second individual to have a profound effect in shaping the Internet was Norbert Wiener. Wiener was an early pioneer in the study of stochastic and noise processes, work that was relevant to electronic engineering, communication, and control systems, and he is regarded as the originator of cybernetics.
This field of study formalized notions of feedback and influenced research in many other fields, such as engineering, systems control, computer science, biology, philosophy, etc.
His work in cybernetics inspired future researchers to focus on extending human capabilities with technology. Influenced by Wiener, Marshall McLuhan put forth the idea of a global village interconnected by an electronic nervous system as part of our popular culture. The third individual, J.C.R. Licklider, was given a mandate to further the research of the SAGE system. SAGE was started in the 1950s and became operational by 1963. It remained in continuous operation for over 20 years, until 1983.
While working at IPTO, Licklider evangelized the potential benefits of a country-wide communications network. His chief contribution to the development of the Internet was his ideas, not specific inventions. He foresaw the need for networked computers with easy user interfaces. His ideas foretold of graphical computing, point-and-click interfaces, digital libraries, e-commerce, online banking, and software that would exist on a network and migrate to wherever it was needed.
Larry Roberts led the development of the network. There were so many different kinds of computers and operating systems in use throughout the DARPA community that every piece of code would have to be individually written, tested, implemented, and maintained.
Using this approach, each site would only have to write one interface to the commonly deployed Interface Message Processor (IMP). The host at each site connected itself to the IMP using another type of interface that had different physical, data link, and network layer specifications. An application layer, built on top of the NCP, provided services such as email and file transfer. These applications used the NCP to handle connections to other host computers.
A minicomputer was created specifically to realize the design of the IMP. Because of this approach, the Internet architecture was an open architecture from the very beginning.
The AHHP (ARPANET Host-to-Host Protocol) specified how to transmit a unidirectional, flow-controlled data stream between two hosts. The ICP (Initial Connection Protocol) specified how to establish a bidirectional pair of data streams between a pair of connected host processes.
Development of the replacement protocol suite, TCP/IP, was conducted by many people. Today, IPv4 is the standard protocol, but it is in the process of being replaced by IPv6, which is described later in this chapter. To push sites to migrate from NCP to TCP/IP, the network's operators deliberately switched NCP off for a period; the second time, later that fall, they disabled NCP again for two days. Even after that, however, there were still a few ARPANET sites that were down for as long as three months while their systems were retrofitted to use the new protocol.
IPv4 was never designed to scale to global levels. To increase available address space, it had to process data packets that were larger (i.e., that contained more bits of data). This resulted in a longer IP address, and that caused problems for existing hardware and software. Solving those problems required the design, development, and implementation of a new architecture and new hardware to support it.
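To make the address-space difference concrete, here is a brief sketch using Python's standard ipaddress module (our choice of tool, not one the text prescribes) to compare the 32-bit IPv4 address length with the 128-bit IPv6 address length discussed above.

# Illustrative sketch only; the ipaddress module is not part of the original text.
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")      # a documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")    # a documentation-range IPv6 address

print(v4.max_prefixlen)    # 32  -> about 4.3 billion possible addresses
print(v6.max_prefixlen)    # 128 -> about 3.4 x 10**38 possible addresses
print(2 ** v4.max_prefixlen, 2 ** v6.max_prefixlen)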
Following release of the RFP, a number of organizations began working toward making the new protocol the de facto standard. Ted Nelson was one of the major visionaries of the coming hypertext revolution. He knew that the technology of his time could never handle the explosive growth of information that was proliferating across the planet.
Nelson popularized the hypertext concept (you can designate as many different kinds of links as you wish, so that you can specify different display or manipulative treatment for the different types), but it was Douglas Engelbart who developed the first working hypertext systems. Engelbart had served during World War II as a U.S. Navy radar technician in the Philippines. His first project was Augment, and its purpose was to develop computer tools to augment human capabilities.
As part of this effort, he developed the mouse, the graphical user interface (GUI), and the first working hypertext system, named NLS (derived from oN-Line System).
NLS was designed to cross-reference research papers for sharing among geographically distributed researchers. NLS provided groupware capabilities, screen sharing among remote users, and reference links for moving between sentences within a research paper and from one research paper to another. In the 1980s, a precursor to the web as we know it today was developed in Europe by Tim Berners-Lee and Robert Cailliau.
Hypertext's popularity skyrocketed, in large part because Apple Computer delivered its HyperCard product free with every Macintosh bought at that time. In 1987, the effects of hypertext rippled through the industrial community.
HyperCard was the first hypertext editing system available to the general public, and it caught on very quickly. A technology revolution few saw coming was in its infancy at this point in time. Robert Cailliau joined forces with Berners-Lee to get the web initiative into high gear.
Cailliau rewrote his original proposal and lobbied CERN management for funding for programmers. He and Berners-Lee worked on papers and presentations in collaboration, and Cailliau helped run the very first WWW conference. In the fall of 1990, Berners-Lee developed the first web browser. A few months later, in August 1991, Berners-Lee posted a notice on a newsgroup called alt.hypertext. Once this information hit the newsgroup, new web servers began appearing all over the world almost immediately.
Following this initial success, Berners-Lee enhanced the server and browser by adding support for the FTP protocol. This made a wide range of existing FTP directories and Usenet newsgroups instantly accessible via a web page displayed in his browser. He also added a Telnet server on info.cern.ch. This web server came to be known as CERN httpd (short for hypertext transfer protocol daemon), and work on it continued until July 1996. Before work stopped on the CERN httpd, Berners-Lee managed to get CERN to provide a certification on April 30, 1993, that the web technology and program code was in the public domain so that anyone could use and improve it.
This was an important decision that helped the web to grow to enormous proportions. Two students from the NCSA (National Center for Supercomputing Applications) group at the University of Illinois, Marc Andreessen and Eric Bina, began work on a browser version for X-Windows on Unix computers, first released as version 0.5. This generated a huge swell in the user base, and subsequent redistribution ensued, creating a wider awareness of the product. Working together to support the product, Bina provided expert coding support while Andreessen provided expert customer support.
They monitored the newsgroups continuously to ensure that they knew about and could fix any bugs reported and make the desired enhancements pointed out by the user base. Mosaic was the first widely popular web browser available to the general public. It helped spread use and knowledge of the web across the world.
Mosaic provided support for graphics, sound, and video clips. An early version of Mosaic introduced forms support, enabling many powerful new uses and applications. Innovations including the use of bookmarks and history files were added. Mosaic became even more popular, helping further the growth of the World Wide Web. In mid-1994, after Andreessen had graduated from the University of Illinois, Silicon Graphics founder Jim Clark collaborated with Andreessen to found Mosaic Communications, which was later renamed Netscape Communications.
In October 1994, Netscape released the first public beta of its browser. The final version, Mozilla 1.0, became the very first commercial web browser. The Mosaic programming team then developed another web browser, which they named Netscape Navigator. Netscape Navigator was later renamed Netscape Communicator, then renamed back to just Netscape. During this period, Microsoft was not asleep at the wheel. Bill Gates realized that the WWW was the future and focused vast resources to begin developing a product to compete with Netscape.
In 1995, Microsoft hosted an Internet Strategy Day and announced its commitment to adding Internet capabilities to all its products. In fulfillment of that announcement, Microsoft Internet Explorer arrived as both a graphical Web browser and the name for a set of technologies. Windows 95 also included an add-on to the operating system called Internet Explorer 1.0.
One of the key factors in the success of Internet Explorer was that it eliminated the need for the cumbersome manual installation required by many of the existing shareware browsers. The Netscape browser led in user and market share until Microsoft released Internet Explorer, but the latter product took the market lead in the late 1990s. This was due mainly to its distribution advantage, because it was included in every version of Microsoft Windows.
The browser wars had begun, and the battlefield was the Internet. In 1998, Netscape decided to release a free, open source software version of Netscape named Mozilla (which was the internal name for the old Netscape browser).
Mozilla has steadily gained market share, particularly on non-Windows platforms such as Linux, largely because of its open source foundation. Mozilla Firefox, released in November 2004, became very popular almost immediately. Another important development was clustering, in which a group of linked computers works together as a single system. This technique was common and was used by many IT departments. To the user, it made little difference which CPU executed an application. Cluster management software ensured that the CPU with the most available processing capability at that time was used to run the code.
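As a rough illustration of the placement decision just described, the following Python sketch picks the cluster node with the most available processing capability; the node names and load figures are hypothetical, and real cluster managers apply far more sophisticated policies.

# Hypothetical load data: fraction of CPU capacity currently in use per node.
cluster_load = {"node-a": 0.82, "node-b": 0.35, "node-c": 0.67}

def pick_node(load_by_node):
    """Return the node with the most available processing capability."""
    return min(load_by_node, key=load_by_node.get)

print(pick_node(cluster_load))    # -> "node-b"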
A key to efficient cluster management was engineering where the data was to be held. This process became known as data residency. Proponents of grid computing drew an analogy with the electrical grid: if companies cannot generate their own power, it is reasonable to assume they will purchase that service from a third party capable of providing a steady electricity supply, and computing capacity can be bought in the same way. Grid computing expands on the techniques used in clustered computing models, where multiple independent clusters appear to act like a grid simply because they are not all located within the same domain.
Because of the distributed nature of a grid, computational nodes could be anywhere in the world. Paul Wallis explained the data residency issue for a grid model like this: It was fine having all that CPU power available, but the data on which the CPU performed its operations could be thousands of miles away, causing a delay (latency) between data fetch and execution.
CPUs need to be fed and watered with different volumes of data depending on the tasks they are processing. A toolkit called Globus was created to solve these issues, but the infrastructure hardware available still has not progressed to a level where true grid computing can be wholly achieved.
The Globus Toolkit is an open source software toolkit used for building grid systems and applications. It is being developed and maintained by the Globus Alliance and many others all over the world. The Globus Alliance has grown into a community of organizations and individuals developing fundamental technologies to support the grid model. The toolkit provided by Globus allows people to share computing power, databases, instruments, and other online tools securely across corporate, institutional, and geographic boundaries without sacrificing local autonomy.
The cloud is helping to further propagate the grid computing model. Cloud-resident entities such as data centers have taken the concepts of grid computing and bundled them into service offerings that appeal to other entities that do not want the burden of infrastructure but do want the capabilities hosted from those data centers. Amazon S3 is storage for the Internet. According to the Amazon S3 website, it provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web.
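As an illustration of that interface, the sketch below stores and then retrieves an object using the boto3 Python SDK. The choice of boto3 is ours (the book predates it), the bucket name is hypothetical, and the example assumes AWS credentials are already configured.

# Sketch only: assumes configured AWS credentials and an existing bucket.
import boto3

s3 = boto3.client("s3")
bucket = "example-bucket"            # hypothetical bucket name

# Store an object...
s3.put_object(Bucket=bucket, Key="docs/report.txt", Body=b"hello, cloud")

# ...and retrieve it later from anywhere with network access.
response = s3.get_object(Bucket=bucket, Key="docs/report.txt")
print(response["Body"].read())       # -> b"hello, cloud"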
The service aims to maximize benefits of scale and to pass those benefits on to developers. EMC's Centera content-addressed storage platform takes a related approach. When a user creates a document, the application server sends it to the Centera storage system. The storage system then returns a unique content address to the server. The unique address allows the system to verify the integrity of the documents whenever a user moves or copies them. From that point, the application can request the document by submitting the address.
Duplicates of documents are saved only once under the same address, leading to reduced storage requirements. Centera then retrieves the document regardless of where it may be physically located.
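The content-address idea can be sketched in a few lines of Python: a document's address is derived from its content (here with SHA-256, a simplification rather than Centera's actual scheme), so an identical document hashes to the same address and is stored only once.

# Simplified sketch of content-addressed storage; not the actual Centera API.
import hashlib

store = {}    # address -> document bytes

def put(document: bytes) -> str:
    address = hashlib.sha256(document).hexdigest()    # content-derived address
    store.setdefault(address, document)               # duplicates stored once
    return address

def get(address: str) -> bytes:
    return store[address]

addr1 = put(b"quarterly report")
addr2 = put(b"quarterly report")      # duplicate: same address, no new copy
assert addr1 == addr2 and len(store) == 1
print(get(addr1))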
The Centera cloud monitors data usage and automatically moves data around in order to load-balance data requests and better manage the flow of Internet traffic. Centera is constantly self-tuning to react automatically to surges in demand. The Centera architecture functions as a cluster that automatically configures itself upon installation. The system also handles fail-over, load balancing, and failure notification.
There are some drawbacks to these cloud-based solutions, however. An example is a recent problem at Amazon S3, which Amazon explained as follows: While we carefully monitor our overall request volumes and these remained within normal ranges, we had not been monitoring the proportion of authenticated requests.
Importantly, these cryptographic requests consume more resources per call than other request types. Shortly before the outage, we began to see several other users significantly increase their volume of authenticated calls.
The last of these pushed the authentication service over its maximum capacity before we could complete putting new capacity in place. In addition to processing authenticated requests, the authentication service also performs account validation on every request Amazon S3 handles.
This caused Amazon S3 to be unable to process any requests in that location. We then moved enough capacity online to resolve the issue. We are taking immediate action on the following: (a) improving our monitoring of the proportion of authenticated requests; (b) further increasing our authentication service capacity; and (c) adding additional defensive measures around the authenticated calls. The term virtualization was coined in the 1960s in reference to a virtual machine (sometimes called a pseudo-machine).
The creation and management of virtual machines has often been called platform virtualization. Platform virtualization is performed on a given computer hardware platform by software called a control program.
The control program creates a simulated environment, a virtual computer, which enables the device to use hosted software specific to the virtual environment, sometimes called guest software. The guest software, which is often itself a complete operating system, runs just as if it were installed on a stand-alone computer.
Because the guest software often requires access to specific peripheral devices in order to function, the virtualized platform must support guest interfaces to those devices. Virtualization technology is a way of reducing the majority of hardware acquisition and maintenance costs, which can result in significant savings for any company.
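To give a feel for what driving a control program looks like in practice, the sketch below lists and starts guest machines through the libvirt Python bindings. The use of libvirt, the KVM/QEMU connection URI, and the guest name are all assumptions for illustration, not something this chapter prescribes.

# Sketch only: assumes a libvirt-managed KVM/QEMU host with the libvirt-python
# bindings installed; the guest name "guest-vm" is hypothetical.
import libvirt

conn = libvirt.open("qemu:///system")      # connect to the local hypervisor

for dom in conn.listAllDomains():          # enumerate defined guest machines
    print(dom.name(), "running" if dom.isActive() else "stopped")

guest = conn.lookupByName("guest-vm")
if not guest.isActive():
    guest.create()                         # boot the guest on this host

conn.close()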
To improve performance, early forms of parallel processing were developed to allow interleaved execution of two programs simultaneously. The next advancement in parallel processing was multiprogramming.
In a multiprogramming system, multiple programs submitted by users are each allowed to use the processor for a short time, each taking turns and having exclusive time with the processor in order to execute instructions. Round-robin scheduling is one of the oldest, simplest, fairest, and most widely used scheduling algorithms, designed especially for time-sharing systems. All executable processes are held in a circular queue. The time slice is defined based on the number of executable processes that are in the queue.
For example, if there are five user processes held in the queue and the time slice allocated for the queue to execute in total is 1 second, each user process is allocated 200 milliseconds of process execution time on the CPU before the scheduler begins moving to the next process in the queue.
New processes are always added to the end of the queue. The CPU scheduler picks the first process from the queue, sets a timer to interrupt it after the expiration of one time slice, and then dispatches it; when that slice expires, the scheduler moves on to the next process in the queue. The process whose time has expired is placed at the end of the queue. If a process is still running at the end of its time slice, the CPU is interrupted and the process goes to the end of the queue. If the process finishes before the end of the time slice, it releases the CPU voluntarily.
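A minimal Python simulation of the round-robin behavior just described follows; the process names and burst times are invented, and the time slice is counted in abstract units rather than real CPU time.

from collections import deque

# Hypothetical processes: (name, remaining execution time in abstract units).
queue = deque([("P1", 5), ("P2", 2), ("P3", 4)])
TIME_SLICE = 2

while queue:
    name, remaining = queue.popleft()       # pick the first process in the queue
    ran = min(TIME_SLICE, remaining)        # run it for at most one time slice
    remaining -= ran
    print(f"{name} ran {ran} unit(s), {remaining} remaining")
    if remaining > 0:
        queue.append((name, remaining))     # unfinished: back to the end of the queue
    # a finished process simply releases the CPU and leaves the queue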
Every time a process is granted the CPU, a context switch occurs, which adds overhead to the process execution time. To users it appears that all of the programs are executing at the same time. Resource contention problems often arose in these early systems. Explicit requests for resources led to a condition known as deadlock. Competition for resources on machines with no tie-breaking instructions led to the critical section routine. Contention occurs when several processes request access to the same resource.
In order to detect deadlock situations, a counter for each processor keeps track of the number of consecutive requests from a process that have been rejected.
Once that number reaches a predetermined threshold, a state machine that inhibits other processes from making requests to the main store is initiated until the deadlocked process is successful in gaining access to the resource. In early multiprocessing systems, two or more processors shared a common workload. This arrangement was necessary because it was not then understood how to program the machines so they could cooperate in managing the resources of the system.
Vector processing was developed to increase processing performance by operating in a multitasking manner. Matrix operations were added to computers to allow a single instruction to manipulate two arrays of numbers performing arithmetic operations. This was valuable in certain types of applications in which data occurred in the form of vectors or matrices.
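The effect of such vector operations can be mimicked in software with NumPy, which applies a single expressed operation to whole arrays at once; NumPy is a stand-in chosen for illustration, not the hardware instruction sets the text refers to.

# NumPy as a software stand-in for hardware vector instructions: one operation
# manipulates two whole arrays of numbers at once.
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 20.0, 30.0, 40.0])

print(a + b)     # element-wise addition of two vectors: [11. 22. 33. 44.]
print(a @ b)     # dot product of the two vectors: 300.0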
In applications with less well-formed data, vector processing was less valuable. The next step was symmetric multiprocessing (SMP). The primary goal of SMP is to achieve sequential consistency, in other words, to make SMP systems appear to be exactly the same as a single-processor, multiprogramming platform. However, programmers had to deal with the increased complexity and cope with a situation where two or more programs might read and write the same operands simultaneously.
This difficulty, however, is limited to a very few programmers, because it only occurs in rare circumstances. To this day, the question of how SMP machines should behave when accessing shared data remains unresolved. Data propagation time increases in proportion to the number of processors added to SMP systems.
After a certain number (usually somewhere around 40 to 50 processors), performance benefits gained by using even more processors do not justify the additional expense of adding such processors. To solve the problem of long data propagation times, message passing systems were created.
In these systems, programs that share data send messages to each other to announce that particular operands have been assigned a new value. There is a network designed to support the transfer of messages between applications. This allows a great number of processors (as many as several thousand) to work in tandem in a system. These systems are highly scalable and are called massively parallel processing (MPP) systems.
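The message-passing idea, in which cooperating programs announce new operand values to one another rather than sharing memory, can be sketched with Python's multiprocessing pipes; this illustrates the concept only, not any particular MPP interconnect.

# Conceptual sketch of message passing between two cooperating processes.
from multiprocessing import Process, Pipe

def worker(conn):
    name, value = conn.recv()             # wait for an "operand updated" message
    print(f"worker saw {name} = {value}")
    conn.send(("ack", name))
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send(("x", 42))            # announce that operand x has a new value
    print(parent_end.recv())              # -> ('ack', 'x')
    p.join()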
In this form of computing, all the processing elements are interconnected to act as one very large computer.
This approach is in contrast to a distributed computing model, where massive numbers of separate computers are used to solve a single problem, such as in the SETI project, mentioned previously. In data mining, there is a need to perform multiple searches of a static database. Single-chip implementations of massively parallel processor arrays are becoming ever more cost effective due to advancements in integrated-circuit technology.
An example of the use of MPP can be found in the field of artificial intelligence. For example, a chess application must analyze the outcomes of many possible alternatives and formulate the best course of action to take. Another example can be found in scientific environments, where certain simulations such as molecular modeling and complex mathematical problems can be split apart and each part processed simultaneously.
Parallel data query (PDQ) is a technique used in business. This technique divides very large data stores into pieces based on various algorithms. Rather than searching sequentially through an entire database to resolve a query, 26 CPUs might be used simultaneously to perform a sequential search, each CPU individually evaluating a letter of the alphabet.
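A toy version of this partitioned search, using a Python process pool: the "database" is a small in-memory list and the partitioning key is the first letter of each record, standing in for the 26-way split described above.

# Toy parallel data query: each worker scans only the partition for one letter.
from multiprocessing import Pool
from string import ascii_lowercase

DATABASE = ["apple", "avocado", "banana", "cherry", "citrus", "zebra"]
TARGET = "cherry"

def search_partition(letter):
    """Scan only the records whose first letter matches this partition."""
    partition = [rec for rec in DATABASE if rec.startswith(letter)]
    return [rec for rec in partition if rec == TARGET]

if __name__ == "__main__":
    with Pool() as pool:
        hits = pool.map(search_partition, ascii_lowercase)   # one task per letter
    print([h for sub in hits for h in sub])                  # -> ['cherry']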
MPP machines are not easy to program, but for certain applications, such as data mining, they are the best solution. Examining the history of computing hardware and software helps us to understand why we are standing on the shoulders of giants. We discussed how the rules computers use to communicate came about, and how the development of networking and communications protocols has helped drive the Internet technology growth we have seen over the last several decades.
This, in turn, has driven even more changes in protocols and forced the creation of new technologies to mitigate addressing concerns and improve the methods used to communicate over the Internet. The use of web browsers has led to huge Internet growth and a migration away from the traditional data center. In the next chapter, we will begin to examine how services offered to Internet users have also evolved and changed the way business is done.
Chapter 2: Web Services Delivered from the Cloud
Infrastructure is also a service in cloud land, and there are many variants on how infrastructure is managed in cloud environments. When vendors outsource infrastructure as a service (IaaS), the model relies heavily on modern on-demand computing technology and high-speed networking.
Outsourced hardware environments (called platforms) are available as Platforms-as-a-Service (PaaS), and we will look at Mosso (Rackspace) and examine key characteristics of their PaaS implementation. As technology migrates from the traditional on-premise model to the new cloud model, service offerings evolve almost daily. Our intent in this chapter is to provide some basic exposure to where the field is currently from the perspective of the technology and give you a feel for where it will be in the not-too-distant future.
Web service offerings often have a number of common characteristics, such as a low barrier to entry, where services are offered specifically for consumers and small business entities.
Often, little or no capital expenditure for infrastructure is required from the customer. While massive scalability is common with these types of offerings, it is not always necessary. Many cloud vendors have yet to achieve massive scalability because their user base generally does not require it. Multitenancy enables cost and resource sharing across the often vast user base. Providers of communication-focused cloud solutions, known as CaaS vendors, are responsible for the management of hardware and software required for delivering Voice over IP (VoIP) services, Instant Messaging (IM), and video conferencing capabilities to their customers.
This model began its evolutionary process from within the telecommunications (Telco) industry, not unlike how the SaaS model arose from the software delivery services sector. CaaS vendors are responsible for all of the hardware and software management consumed by their user base. CaaS is designed on a utility-like pricing model that provides users with comprehensive, flexible, and usually simple-to-understand service plans. CaaS service offerings are often bundled and may include integrated access to traditional voice or VoIP and data, advanced unified communications functionality such as video calling, web collaboration, chat, real-time presence and unified messaging, a handset, local and long-distance voice services, voice mail, and advanced calling features such as caller ID, three-way and conference calling, etc.
A CaaS solution includes redundant switching, network, POP and circuit diversity, customer premises equipment redundancy, and WAN fail-over that specifically addresses the needs of its customers. All VoIP transport components are located in geographically diverse, secure data centers for high availability and survivability. CaaS offers flexibility and scalability that small and medium-sized businesses might not otherwise be able to afford.
CaaS service providers are usually prepared to handle peak loads for their customers by providing services capable of allowing more capacity, devices, modes, or area coverage as customer demand necessitates. Network capacity and feature sets can be changed dynamically, so functionality keeps pace with consumer demand and provider-owned resources are not wasted.
CaaS requires little to no management oversight from customers. With a CaaS solution, customers are able to leverage enterprise-class communication services without having to build a premises-based solution of their own. This allows those customers to reallocate budget and personnel resources to where their business can best use them.
Hosted and Managed Solutions
Remote management of infrastructure services provided by third parties once seemed an unacceptable situation to most companies. However, over the past decade, with enhanced technology, networking, and software, the attitude has changed. This is, in part, due to cost savings achieved in using those services.
Along with features such as VoIP and unified communications, the integration of core PBX features with advanced functionality is managed by one vendor, who is responsible for all of the integration and delivery of services to users. The development process and subsequent introduction of new features in applications is much faster, easier, and more economical than ever before.
Customers pay a fee (usually billed monthly) for what they use. Customers are not required to purchase equipment, so there is no capital outlay. Bundled into these types of services are ongoing maintenance and upgrade costs, which are incurred by the service provider. The use of CaaS services gives companies the ability to collaborate across any workspace. Better communication allows organizations to adapt quickly to market changes and to build competitive advantage. CaaS can also accelerate decision making within an organization.
Innovative unified communications capabilities such as presence, instant messaging, and rich media services help ensure that information quickly reaches whoever needs it.
Flexible Capacity and Feature Set
When customers outsource communications services to a CaaS provider, they pay for the features they need when they need them. The service provider can distribute the cost of services and delivery across a large customer base. As previously stated, this makes the use of shared feature functionality more economical for customers to implement.
Economies of scale allow service providers enough flexibility that they are not tied to a single vendor investment. They are able to leverage best-of-breed providers such as Avaya, Cisco, Juniper, Microsoft, Nortel, and ShoreTel more economically than any independent enterprise. Since the invention of the integrated circuit in 1958, the number of transistors that can be placed inexpensively on an integrated circuit has increased exponentially, doubling approximately every two years.
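That doubling claim can be written as a simple projection, N(t) = N0 * 2^((t - t0) / 2); the starting count in the sketch below is an arbitrary round number chosen only to illustrate the growth rate.

# Doubling roughly every two years: N(t) = N0 * 2 ** ((t - t0) / 2).
def transistor_projection(n0, t0, t):
    return n0 * 2 ** ((t - t0) / 2)

# Arbitrary starting point: 1,000,000 transistors in year 0 of the projection.
for years in (0, 2, 10, 20):
    print(years, int(transistor_projection(1_000_000, 0, years)))
# 0 -> 1,000,000   2 -> 2,000,000   10 -> 32,000,000   20 -> 1,024,000,000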
Unlike IC components, the average life cycles for PBXs and key communications equipment and systems range anywhere from five to 10 years. With the constant introduction of newer models for all sorts of technology (PCs, cell phones, video software and hardware, etc.), CaaS vendors must absorb this burden for the user by continuously upgrading the equipment in their offerings to meet changing demands in the marketplace.
There is no extra expense for the constant power consumption that an on-premises communications facility would demand. If your business experienced a serious or extended communications outage, how long could your company survive? A hosted CaaS solution mitigates risk and allows companies in a location hit by a catastrophic event to recover as soon as possible.
Unlike data continuity, eliminating single points of failure for a voice network is usually cost-prohibitive because of the large scale and management complexity of the project.
With a CaaS solution, multiple levels of redundancy are built into the system, with no single point of failure. IaaS providers manage the transition and hosting of selected applications on their infrastructure. Customers maintain ownership and management of their applications.