Web services have become an indispensable part of our lives. In general, a Web service provides various information based on text or multimedia objects over the Hypertext Transfer Protocol (HTTP). Sustaining availability against various attacks is one of the most critical issues for high-profile Web sites such as banks and credit card payment gateways. Inaccessibility may cost a company significant damage to its reputation and revenue. When life-critical services are provided over the Web, the situation is even worse. This is why companies battle attacks aimed at causing unavailability almost on a daily basis.
Availability mainly depends on two factors: the computing capability of a Web server and the network bandwidth to which the Web server is connected. Technologies to improve computing capability have grown rapidly and can be easily adapted to current systems. However, network technologies have grown more slowly than computing capability. Moreover, it is demanding to apply new network technologies to our daily lives because they require not only new hardware interfaces but also the corresponding infrastructure. That is, even with the rapid growth of computing capability, network bandwidth has not improved enough. Moreover, the resources that a Web server has to provide are becoming much bigger (e.g., swf and mp4 files), and the number of users and devices connected to the Internet has been rapidly increasing [2, 3]. This implies that the best practice for providing availability of Web services is to reduce Web traffic when possible. Because of the current surge in worldwide occurrences of distributed denial-of-service (DDoS) attacks, there is much research on attacks that deplete network traffic. In this paper, we define traffic consumption as behavior that wastes the traffic of a target Web server. A malicious user who wastes the traffic of a target Web server not only hampers the Web service but may also cause denial of service (DoS). A DoS attack is an attempt to prevent a Web server from functioning efficiently or at all, denying the legitimate use of a service. In a DDoS attack, attackers operate in a distributed manner. Mirkovic et al. described a taxonomy of DDoS attacks. Even without the concerted effort of a malicious user, threats against availability may still exist. The Slashdot effect, also known as the Reddit effect or flash crowd, takes place when a site or a Twitter user with a massive number of followers posts a link to a relatively small site with limited capacity.
As a result, many users may attempt to connect to the site at the same time, causing a massive increase in traffic or even a temporary shutdown. Of course, this effect can be exploited by a malicious user who posts the link of a target server on several popular Web sites.
There are also cases where the goal of an attack is not to make the service unavailable. The economic denial-of-sustainability (EDoS) attack [6, 7] is similar to a DDoS attack, but its purpose is different. Whereas the goal of a DDoS attack may be to overwhelm a target Web server, the goal of an EDoS attack is to inflict economic loss on the target Web server by requesting relatively resource-intensive content. For example, a Web server hosted on Amazon EC2 is charged according to its traffic. In the case of DDoS, at least, the target server is able to recognize that it is under attack by traffic monitoring. However, the detection of EDoS is much more difficult because the traffic increase caused by an EDoS attack may be widely spread and unnoticeable. A basic solution to thwart these traffic consumption attacks is the detect-and-cut method. Unfortunately, detect-and-cut may not be an efficient approach because some attacks, such as EDoS, are hardly detectable. To mitigate these problems, this paper proposes the client cloud Web service layer (CCWS layer) as a middle layer between the HTTP layer and the Transmission Control Protocol/Internet Protocol (TCP/IP) layer. The proposed layer enables clients to exchange Web contents, consequently reducing the traffic on a server while providing compatibility with current Web protocols in public Web services.
2 Related Works and Contributions
2.1 Related works
There are two approaches for reducing the traffic of Web servers: server-side and client-side approaches. The server-side approach enhances the availability of Web services by provisioning extra server resources. Baentsch et al. analyzed the benefits that can be obtained from both caching and replication. Cardellini et al. surveyed Web system architectures that consist of multiple server nodes distributed over a local area, with one or more mechanisms to spread client requests among the nodes. Freedman et al. presented CoralCDN and demonstrated its superiority. CoralCDN deconcentrates traffic flows by using additional entities (Coral HTTP proxies) that provide clients with Web services alongside a main server. This forms intermediate proxy nodes between Web servers and clients in order to cache Web resources. It efficiently alleviates the traffic of the Web server and reasonably distributes the load in the infrastructure by using its own overlay routing technique. These solutions, as well as other content delivery network solutions [13, 14], however, share a common challenge: they incur maintenance costs for additional servers even while services are uncongested.
As an alternative, the client-side approach, which exploits the resources of clients using peer-to-peer (P2P) protocols, has also been explored. We note that P2P is extensively used for exchanging information between clients without a Web server, as in CoopNet [15-18]. To reduce server-side congestion, Mattson et al. developed an HTTP/1.1 overlay protocol called HTTP-P2P, which redirects a client's request to a previous client. Once a peer makes a request, the server negotiates with the peer about whether it is willing to redistribute the requested content. When the peer accepts, the server redirects subsequent requests to the peer for a while. Mattson et al. [20-22] proposed another approach that reserves bandwidth by compressing HTTP request and response headers. This slightly reduces server traffic but cannot withstand heavy traffic consumption. Terrace et al. [23, 24] introduced another method, called Firecoral, that exchanges contents of a Web server by sharing the Web browser cache between clients. Compared with HTTP-P2P, their method has the merit that there is no need to modify the current HTTP protocol. It uses two external servers for managing and authenticating its service: a tracker server that maintains peer information and a trusted third party for the signing service. The problem with this method is that functionality and security converge on these external servers. Thus, when an external server is congested, the congestion propagates to the Web servers connected to it. Consequently, the availability of a Web service is inevitably affected by the external server. Their method also has a compatibility issue between different Web browsers, that is, a browser cache cannot be used by another browser.
For instance, Mozilla Firefox and Google Chrome have different cache structures; thus, their caches would have to be transformed into one another's structure in order to be exchanged, which has not been widely studied.
To enhance the availability of Web services, our approach takes a client-side solution using P2P protocols, that is, it utilizes the resources of clients to share the resources of the Web server. Our solution can effectively reduce the traffic of a Web server under both detectable and undetectable traffic consumption. The proposed architecture is based on an additional layer, called the client cloud Web service layer (CCWS layer). A CCWS layer in a client serves as an underlying layer that discovers specific Web components in other clients on behalf of the upper layer. The layer hence enables Web components to be delivered between clients, so that a cloud consisting of clients, named the client cloud, provides Web services. To utilize the resources of clients, the proposed scheme uses P2P communication, which is already widely used in our daily lives. Several P2P systems and applications have been validated [25-30]. P2P systems are generally used for single-file sharing among many clients, who willingly participate in the networks with their resources when necessary. In addition, Wikimedia is now experimentally adopting a file-sharing mechanism for distributing its video contents to reduce bandwidth cost. The schemes of CoralCDN, HTTP-P2P, and Firecoral also use P2P communication to share resources between clients. We note that CoralCDN is relatively heavy when a client accesses the Web server for the first time and that HTTP-P2P may not work as expected. Also, the scheme in Firecoral is costly because of the need for extra entities, whereas our scheme uses P2P communication without any compatibility problems, as explained later.
The proposed architecture is comparable with the schemes of CoralCDN, HTTP-P2P, and Firecoral. A request of the first client in CoralCDN always passes through the CoralCDN systems, which delays the process for first-accessing clients. The critical difference from HTTP-P2P is that HTTP-P2P is not compatible with the current HTTP because it requires its own control header in the HTTP message and hence needs to modify the current HTTP layer, whereas our architecture uses the CCWS layer, which is in charge of sharing resources among clients and hence can be employed without any modification of the current HTTP layer. The critical difference from Firecoral is that Firecoral does not work between different Web browsers and requires a trusted third party to verify a digital signature, which is a relatively heavy operation. Another drawback of HTTP-P2P is that a client that has expressed the will to redistribute its contents may, in fact, refuse to redistribute, which in turn causes a re-request message to the server and further aggravates the congestion. Meanwhile, in our approach, a CCWS layer provides a list of clients owning the requested components, which enables a client to try several choices in sequence. A detailed comparison of our scheme with previous schemes is given in Section 4.1.5.
The effectiveness of the proposed architecture in conserving traffic is shown by NS-2 simulation. We note that an increase in traffic may stem from an increase in the size of Web items or in the number of requests. It turns out that our method is more efficient when the number of requests increases, which is the more desirable case. To show this, we simulate the same amount of Web server traffic under two conditions: a small number of clients with large Web contents, and vice versa.
The rest of the paper is organized as follows. In Section 3, we propose the CCWS architecture and its logic on both server and client. Section 4 discusses the security of the architecture, simulates the proposed architecture, and analyzes the simulation results. Section 5 concludes the paper.
3 Client Cloud Web Service
3.1 Hypertext Transfer Protocol-based Web service
To provide Web services, HTTP is widely used. HTTP is a traditional server–client application protocol driven by request and response. When an HTTP request message from a client is delivered to a Web server, the server answers with an HTTP response containing the requested item. If the item is unavailable for some reason, an HTTP response indicating the reason is sent to the client.
3.2 Description of the client cloud Web service layer
To enhance the availability of Web services, our approach takes a client-side solution using P2P protocols. The key component of the proposed architecture is the CCWS layer, which resides between the HTTP layer and the TCP/IP layer as a middle layer. This layer is in charge of managing all messages flowing between the HTTP layer and the TCP/IP layer. Figure 1 shows the configuration of the proposed architecture. The left side of Figure 1 depicts the architecture of the Web server: the CCWS layer of the Web server stores all Web items and frequently refreshes its CCWS table. The right side of Figure 1 depicts a group of clients who try to access the Web server: while the HTTP layer requests Web items from the Web server, the CCWS layer maintains the CCWS table and discovers items among appropriate clients.
In normal HTTP, upon receiving an HTTP request message from a client, a Web server delivers the requested item, such as an html, jpg, or gif file, to the client. In the proposed architecture, however, upon receiving an HTTP request message from a client, a Web server delivers either the requested item or the address of a client that possesses the corresponding item. This is decided by the CCWS layer in the server, on the basis of a threshold value P applied to the incoming requests from clients. For this purpose, the CCWS layer maintains a table, called the CCWS table, that records the addresses of clients holding each item. This table is updated whenever the server sends an item to a requesting client. When the client receives the CCWS table instead of the item it requested, it requests the item from one of the clients possessing it by referring to the table. When the client receives the item instead of the table, the item is temporarily stored in cached storage, called CCWS storage, for redistribution.
We may expect that at the early stage of a Web service, items are directly distributed to clients. When a sufficient number of items has been distributed among clients, clients are able to dynamically share their cached items for the Web service. This feature can remarkably reduce the traffic of a Web server. Analyzing how to decide the value of P and its impact on the traffic of the server is of great concern; this is performed in Section 4.
3.3 Structure of client cloud Web service table
A Web server stores several Web component files. We denote each component as an item, such as an image, an html file, or a rich object. Items are saved as files in CCWS storage and can be written as the set {item0, item1, …, itemm − 1}, where m is the number of items.
Among them, item0 is usually the main page (e.g., index.html) of the Web server. The CCWS table has six attributes for an item as follows:
booli is a Boolean value decided by the Web server. If the value is true, the CCWS layer in a client requests the corresponding item from other clients; otherwise, the CCWS layer requests the item directly from the server. This is necessary for server-side scripting contents because they are always newly generated. ki is a unique identifier of the item in the CCWS table. It is generated as ki = Hash(itemi||ni||si||ti), where Hash is a cryptographic hash function with the collision-resistance property, such as SHA-1. ni is the file path and name, and si is the size of itemi. ti is time information that determines the freshness of itemi by setting a limited available time. The last attribute is the list of addresses of clients possessing itemi, as follows:
where addrx is an address of a client x. The list has a first-in first-out structure and is distributed only by the Web server. The list and fast flux may seem similar; however, they are certainly different. The multiple IP addresses in fast flux are used to assign a single domain name multiple IP addresses, whereas the list of clients here assigns each content its corresponding IP addresses. The CCWS table also includes a signature σ as authentication information, as follows:
We summarize our notations in Table 1. Because the CCWS table is transmitted in place of real items, our protocol can reduce the traffic of the Web server. Thus, the size of the CCWS table is an important issue for reserving bandwidth. Note that the size of the CCWS table depends on the number of items and the length of each address list but is independent of the size of each item.
Table 1. Notations.
HTTPREQ: HTTP request message
HTTPRES: HTTP response message
CCWSREQ: CCWS request message
CCWSRES: CCWS response message
CCWS table: CCWS table for Web service
k: hash value of item, n, s, and t
CCWS storage: CCWS storage for storing items
Message for challenge and response
n: absolute path and name of item
s: size of item
t: valid time of item
list: list containing addresses of clients
l: size of the list
addrx: address of x
Hash: cryptographic hash function
σ ← Sign(m): signature generated over message m
We may assume that bool, k, n, s, t, and the address list occupy 1, 20, 64, 4, 4, and l ∗ 4 bytes, respectively, where the SHA-1 algorithm is adopted as the cryptographic hash function and a fixed port number is used. We further assume that the size of σ is 128 bytes. According to Google statistics, the average number of GET messages per host (i.e., the number of items) is 24. Under the assumption that the number of addresses in each list is 4, the total size of the CCWS table is approximately 2.6 kB, which is very small compared with the size of each item. For example, an image file may require several hundred kilobytes.
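This back-of-the-envelope estimate can be reproduced directly. The field widths below are the assumptions just stated; the total comes to about 2.7 kB including the signature, in line with the roughly 2.6 kB figure after rounding.

```python
# Per-field sizes in bytes, as assumed above
BOOL, K, N, S, T = 1, 20, 64, 4, 4   # flag, SHA-1 key, name, size, time
l = 4                                # addresses per list, 4 bytes each
NUM_ITEMS = 24                       # average GET messages per host
SIGMA = 128                          # signature size

per_item = BOOL + K + N + S + T + l * 4
total = NUM_ITEMS * per_item + SIGMA
print(per_item, total)               # 109 bytes per item, 2744 bytes in total
```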
3.4 Overview of client cloud Web service
When a Web server provides services for the first time, it has to classify its components as either dynamic or static. Dynamic content is server-side scripting output, which is frequently regenerated by the server. In contrast, a static component is a file that rarely changes. Static components are the ones mainly shared among clients through the CCWS layer. After the initialization step, the server maintains the list of clients possessing each item in the CCWS table. When a client requests a component from the server, the server decides whether to send the item as a response or the addresses of other clients who have the item. In the former case, the server records the address of the client in the CCWS table and sends the item; otherwise, the server just sends the CCWS table. Clearly, this approach does not require modifying the current HTTP protocol suite because the procedure is handled solely by the CCWS layer. Therefore, our architecture can be easily adopted in the current Web environment (Figure 2).
3.4.1 Web server logic
Algorithm 1 shows the logic of a Web server adopting the CCWS layer. While initializing the Web server, the server allocates memory space for the CCWS table.
Initializing CCWS table: First of all, given itemi, the Web server decides the property of the item as static or dynamic. If it is static, the Web server sets booli to true; otherwise, booli is set to false. Next, the server collects the information of itemi, such as its name including the path, ni, and its size, si. The value ti indicates the lifetime of itemi; if the Web server cannot assure the durability of itemi, its lifetime is set shorter than the others. Finally, the server computes ki = Hash(itemi||ni||si||ti). This hash value guarantees both the uniqueness and the integrity of itemi. A client who tries to modify itemi has to find a collision, that is, a modified item whose hash value equals ki, which is computationally infeasible because of the collision-resistance property of the cryptographic hash function. This procedure is repeated for all items in CCWS storage. After that, the series of ki is signed with the private key of the Web server, σ ← Sign(k0||k1|| … ||km − 1). Initializing the CCWS table is described in lines 1 to 16 of Algorithm 1.
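The initialization step can be sketched as follows. This is a minimal illustration, not Algorithm 1 itself: the dictionary layout and the `sign` callback (standing in for the server's private-key Sign operation) are assumptions.

```python
import hashlib
import struct

def make_key(item_bytes, name, size, lifetime):
    # k_i = Hash(item_i || n_i || s_i || t_i), with SHA-1 as the hash
    h = hashlib.sha1()
    h.update(item_bytes)
    h.update(name.encode())
    h.update(struct.pack(">I", size))
    h.update(struct.pack(">I", lifetime))
    return h.digest()  # 20-byte unique identifier

def init_ccws_table(items, sign):
    # items: iterable of (name, content, is_static, lifetime) tuples
    table, keys = [], b""
    for name, content, is_static, lifetime in items:
        k = make_key(content, name, len(content), lifetime)
        table.append({"bool": is_static, "k": k, "n": name,
                      "s": len(content), "t": lifetime, "list": []})
        keys += k
    sigma = sign(keys)  # sigma <- Sign(k_0 || k_1 || ... || k_{m-1})
    return table, sigma
```

In deployment, `sign` would be an RSA signature under the server's private key, which clients verify with the public key before trusting any ki.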
Parameter P: After initializing the CCWS table, the Web server sets the parameter P, which decides how many requests will be answered by delivering items. For instance, given P = 0.7, the Web server delivers the requested item for 70% of incoming requests; the other 30% of requests are answered by sending the CCWS table, letting those requests be handled by other clients. If P is 1.0, the server behaves as a normal HTTP-based Web server. By adjusting P, the Web server can trade off between covering traffic itself and utilizing the resources of clients.
Handling HTTP request: Upon receiving a new HTTP request, the Web server checks whether the property of the requested item is static or dynamic by looking up boolitem. If it is static, the server picks a probability p uniformly at random. When p ≤ P, the Web server sends an HTTP response including the item directly and enqueues the address of the source into the item's address list. Otherwise, the server sends the CCWS table to the source client. Handling HTTP requests is described in lines 17 to 26 of Algorithm 1. When it is necessary to update the CCWS table, the server pauses and reinitializes it.
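The request-handling branch can be sketched as below. The field names are illustrative, and the random source is injected so the probabilistic branch is explicit:

```python
import random

def handle_request(entry, table_bytes, src_addr, P, rng=random.random):
    # entry: the CCWS table row for the requested item
    if not entry["bool"]:                 # dynamic item: always serve directly
        return ("ITEM", entry["n"])
    if rng() <= P:                        # with probability P, serve the item
        entry["list"].append(src_addr)    # record the client as a redistributor
        return ("ITEM", entry["n"])
    return ("TABLE", table_bytes)         # otherwise redirect via the CCWS table
```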
3.4.2 Client logic
Algorithm 2 shows the logic of a client with CCWS layer. CCWS layer handles messages from upper layer and lower layer differently.
Case 1: from upper layer. If a message is from the upper layer (HTTP layer), it is necessarily an HTTP request to a Web server. When a client issues an HTTP request, the HTTP layer encapsulates the message for the requested item as HTTPREQ(nitem, addrto), where addrto is the address of a Web server, sends the message, and waits for the response. After a request message is sent from the HTTP layer, the CCWS layer checks for the requested item in its CCWS table. The lookup result can be categorized into three types.
The result contains the item's key and a nonempty address list: the requested item is in the table. If titem is valid, the CCWS layer dequeues the address of a client possessing the item and sends a CCWS request to it. Otherwise, the CCWS layer sends the original HTTP request.
The result is (kitem, null): This happens for two reasons. The first is that the address list is empty because the CCWS layer has dequeued and consumed all the addresses. The other is that the requested item is a dynamic component, so the request has to be served directly by the Web server. In both cases, the CCWS layer relays the original HTTP request.
The result is null: The CCWS layer simply sends the request message to the Web server because the requested item does not exist in the CCWS table.
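The three outcomes can be sketched with a small lookup function (the dictionary layout is illustrative, and the freshness check on t is omitted for brevity):

```python
def lookup(table, key):
    # Returns None, (key, None), or (key, addr) -- the three cases above
    entry = table.get(key)
    if entry is None:
        return None                    # not in the CCWS table: ask the server
    if not entry["bool"] or not entry["list"]:
        return (key, None)             # dynamic item or exhausted address list
    addr = entry["list"].pop(0)        # dequeue the next candidate peer
    return (key, addr)                 # send CCWSREQ to this peer
```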
Case 2: from lower layer. The messages from lower layer are categorized into the following four types.
The message is HTTPRES(item): This is a normal HTTP response containing the requested item. The Web server has delivered this item directly for one of two reasons: the item is dynamic, or the probability test for sharing succeeded. In the latter case, boolitem = true and the CCWS layer stores the item in its storage.
The message is the CCWS table: This response from the Web server indicates that the item should be re-requested from one of the clients in the list. Upon receiving the message, the client must verify the signature value in the CCWS table with the public key of the Web server. If the signature is valid, the client can trust every kitem in the CCWS table. Next, the CCWS layer of the client sends CCWSREQ(kitem, addrto), where addrto is the address of a client possessing the item. Whenever the CCWS layer receives a CCWS table, it updates its table information immediately.
The message is CCWSRES(item, addrfrom): This is a response from another client to the CCWS request. Upon receiving CCWSRES(item, addrfrom), the CCWS layer checks its integrity. If the item is unmodified, the CCWS layer transforms it into an HTTP response to be compatible with the upper layer. If integrity verification fails, the CCWS layer requests the item from another client, iterating until it has exhausted the last address.
The message is CCWSREQ(kitem, addrto): This is a request for an item from the CCWS layer of another client. The CCWS layer checks both the existence of the item and the freshness of titem. If the item is available, the CCWS layer sends it to the source client.
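The integrity check and peer failover of the CCWSRES case can be sketched as follows. The `send_ccws_req` network call is a stub, and the hash layout follows k = Hash(item||n||s||t) as defined in Section 3.3:

```python
import hashlib
import struct

def fetch_from_peers(k, n, s, t, addrs, send_ccws_req):
    # Try each address in turn until an integrity-verified copy arrives
    for addr in addrs:
        item = send_ccws_req(k, addr)     # CCWSREQ(k_item, addr_to)
        if item is None:
            continue                      # peer gone (churn) or refused
        h = hashlib.sha1(item + n.encode()
                         + struct.pack(">I", s)
                         + struct.pack(">I", t)).digest()
        if h == k:                        # matches the signed identifier
            return item                   # hand up to the HTTP layer
    return None                           # list exhausted: fall back to server
```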
4 Discussions and Simulation
4.1.1 Usefulness of proposed protocol
To apply the proposed protocol to the real world, it is necessary to address several realistic issues in addition to completeness and efficiency. In this section, we discuss two major issues by referring to existing surveys. The first is how many static objects are involved in serving Web objects, and the second is the possibility of persuading clients to contribute their resources for the requested services.
The proportion of static items: Because the mechanism of sharing Web items can only be applied to static items, identifying how many static items exist among all Web components is a crucial factor. In other words, the larger the share of static items among served objects, the more efficiency the proposed protocol guarantees. Existing surveys track the size of Web components. While the total Web size continues to grow, the average number of items is 85 objects per page, and the average size of a Web page is 679 kB. According to the latest survey on Web statistics (August 2011), the average size of static items such as html, images, and swf is approximately 622 kB, whereas the average size of dynamic items is 147 kB. In general, one Web page is made up of a single script page (dynamic content) and multiple static contents when the Web service uses server-side scripting. This implies that the Web server can reasonably reserve its bandwidth because static contents dominate.
Willingness of clients: Our protocol can be considered practical only if clients readily accept contributing their computational or network resources. The willingness of clients is suggested by widely used P2P networks. The concept of P2P networking has developed into various uses across many applications and is already widely adopted in file-sharing applications such as BitTorrent, the most successful P2P application. A measurement study shows the remarkable result that P2P protocol traffic accounts for 55.75% of total traffic on average. This implies that clients are ready to contribute their resources to get prompt Web service rather than tolerate long delays caused by heavy traffic consumption.
4.1.2 Denial of redistribution
HTTP-P2P, as briefly discussed earlier, has a weakness against the denial-of-redistribution attack, in which an adversary requests and obtains resources in order to disturb seamless Web service by refusing redistribution. Because HTTP-P2P negotiates with clients about their intention to redistribute, the adversary can abuse this feature for the denial-of-redistribution attack. Moreover, it aggravates network traffic because clients whose requests are rejected by the adversary re-request objects from the Web server. The effect of this attack is amplified when the attack is distributed. We define Prdor as the probability that a client suffers the denial-of-redistribution attack; it can be derived from Prob(new client meets malicious) ∩ Prob(new client is redirected). Let the numbers of valid clients and adversaries be given, and assume that there is a single object in the Web server, that the clients have formed the Web service, and that each of the Ntotal clients already has the object, which means Ntotal nodes simultaneously have access to the Web server. Prdor of HTTP-P2P can then be simply defined as
where counter is a predefined value in HTTP-P2P that indicates how many times one client should redistribute the object. In the same environment as ours, we use the additional parameter, the address list, which holds multiple addresses of other clients; we denote its length by l. The main difference between previous schemes and CCWS is that the adversaries have to fill the entire list with their addresses, because a single valid address in the list is enough to relieve denial of redistribution. Prdor can be derived from
Thus, Prdor of CCWS will be
Firecoral is also comparable in this manner. In the case of Firecoral, the address list does not exist, but l can be regarded as the number of available peers that the tracker responds with; thus, the probability is
Table 2 summarizes Prdor of these three protocols as described previously.
Table 2. Prdor of HTTP-P2P(counter), Firecoral(num of peers), and CCWS .
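Since a single valid address in the list defeats the attack, the CCWS case reduces to all l list entries being adversarial. Under the simplifying assumption that addresses are drawn uniformly from the Ntotal clients (the exact formulas are elided above), a short Monte Carlo check agrees with the closed form (Nmal/Ntotal)^l:

```python
import random

def prdor_ccws(n_total, n_mal, l, trials=200_000, seed=1):
    # Denial of redistribution succeeds only if ALL l addresses handed to
    # a new client belong to adversaries (uniform-sampling assumption)
    rng = random.Random(seed)
    hits = sum(
        all(rng.randrange(n_total) < n_mal for _ in range(l))
        for _ in range(trials)
    )
    return hits / trials
```

For n_total = 100, n_mal = 20, and l = 3, the estimate clusters around (0.2)^3 = 0.008 and drops geometrically as l grows, illustrating why lengthening the list reduces the attack's success probability.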
4.1.3 Dealing with churn
Because our protocol is built on a P2P distribution substrate, it must cope with the existing problems of P2P networks. Swift arrival and departure of massive numbers of clients cause significant service failures in a distribution system. This situation, called churn, is the most momentous issue in P2P systems. Clients in the network join, leave, and come back several times; sometimes they even leave forever because they no longer use the system. In our protocol, churn may lead to a flood of unavailable entries in the CCWS table. Our strategies to alleviate churn are as follows:
Churn from the natural activity of honest clients: When clients rapidly join and leave the CCWS network, the network may enter an abnormal state. Consider the situation in which a client receiving a CCWS request message has already left the Web server or closed the browser. The client who sent the CCWS request message should then find other clients. In CCWS, a client whose CCWS request fails can still access the other clients in the list. In addition, because the Web server provides the freshest CCWS table whenever it redirects a client to others, address information for redistribution is correctly maintained. Finally, a client who receives a content from the server is referred to only for the lifetime of that content, so clients do not need to keep their connections with the CCWS network longer than the lifetime of the contents.
Malicious adversaries: We can also consider the case in which malicious adversaries frequently request content from the Web server in order to induce performance decline. This case, as defined in Section 4.1.2, is denial of redistribution. Although the CCWS network does not apply a redistribution membership rule, this attack can be overcome because adversaries cannot control the parameters of the Web server, such as P or the address list. As described in Section 4.1.2, our protocol alleviates the effect of malicious adversaries through the address list, and the length of the list further reduces malicious hampering. Therefore, adversaries who try to consume the traffic of a Web server must control many more malicious clients than they expect, as the server reduces P or lengthens the list.
4.1.4 Computational overhead
Our protocol involves a couple of cryptographic operations that may be regarded as heavy. To check the computational burden on the Web server, we categorized and implemented such operations into three types: a cryptographic hash operation, a digital signature, and updating the address list. We implemented them in the C language with the OpenSSL 1.0.0e library and ran them on an Intel i5-2500. Using the 1024-bit RSA-SHA-1 algorithm, a single signature generation requires 0.003579 s, and SHA-1 hashing operates at approximately 250 MBps. These are fast enough for a realistic environment. Furthermore, these cryptographic operations occur only when the CCWS layer in the Web server has to update its CCWS table, that is, when the Web server changes its content information or updates the time information t. Hence, the burden of these operations does not depend on the number of requests from clients.
The number of list updates, in contrast, is directly related to the number of requests. The list is a series of 4-byte IP addresses with a queue structure. Given a new entry, the list drops the oldest address and appends the new one, shifting the positions of all remaining entries. This can be implemented with l integer substitution operations, where l is the number of clients' addresses. In the same computing environment, updating runs at roughly 35,000k operations per second for lengths l of 3–10. As a result, even while the Web server receives a large number of clients' requests, the operations related to forming the CCWS table impose a negligible computational burden on the Web server.
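This bounded-queue behavior maps directly onto a fixed-capacity FIFO. For illustration, Python's `collections.deque` with `maxlen` gives the same drop-oldest, append-newest semantics in O(1) per update (the addresses below are hypothetical):

```python
from collections import deque

l = 4                                  # list length, as in the analysis
addr_list = deque(maxlen=l)            # bounded FIFO of redistributor addresses

for addr in ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5"]:
    addr_list.append(addr)             # appending to a full deque evicts the oldest

print(list(addr_list))                 # ['10.0.0.2', '10.0.0.3', '10.0.0.4', '10.0.0.5']
```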
4.1.5 Comparison with previous schemes
In this subsection, we compare CCWS with previous schemes [12, 19, 24] from various angles to clearly show the merits of our scheme. All these schemes have the same goal, which is to reserve the bandwidth of Web servers, but they achieve it in different ways. Table 3 shows the comparison results.
Low latency of the first access: When a client wants to access a Web server and no peer yet holds its Web objects, the Web server in all schemes has to distribute the object itself. In HTTP-P2P, Firecoral, and CCWS, the Web server sends its objects directly to the client. In CoralCDN, however, the request message stays within the CoralCDN system: the system resolves the coralized Domain Name System query and finds an appropriate node, and if no node holds the object of the Web server, a Coral proxy fetches the resource from the origin server and stores it before serving the client. Because these steps take place for every client that accesses the Web server for the first time, CoralCDN shows a delayed response time in that case.
Low cost of peers for ready to share: In Firecoral, clients that hold actual content for sharing are forced into an additional process: a peer must acquire the signature of the object from a signing service (we omit the steps of the signing service) and must register, together with the signature, the information that it holds the object. The other schemes do not require any preparation process for sharing.
Small logical hop counts under the churn condition: We consider the number of hops required when a redirected request is denied by a malicious peer or does not arrive because of churn. We exclude CoralCDN from this case because its serving entities can be regarded as reliable and are not actual clients. The hop count of HTTP-P2P is relatively larger than those of the other two schemes. Because HTTP-P2P uses a single redirection address, a client that fails to acquire the actual content must return to the Web server: (1) the client's request to a peer is denied, (2) the client re-requests from the Web server, and (3) the server responds and the client requests from another peer. In contrast, a client in Firecoral or CCWS holds multiple addresses of serving peers, so these two schemes are more resilient in this situation.
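The difference in round trips can be illustrated with a toy message count when the first few contacted peers deny a request; the two functions below are a simplified model of the behaviors described above, not the actual protocols:

```python
def messages_single_redirect(denials: int) -> int:
    """HTTP-P2P-style single redirection: every denial forces a
    re-request to the Web server before trying the next peer."""
    # initial server request + (peer request, server re-request) per denial
    # + the final successful peer request
    return 1 + 2 * denials + 1

def messages_multi_redirect(denials: int) -> int:
    """Firecoral/CCWS-style multiple addresses: the client simply tries
    the next peer on its list without returning to the server."""
    return 1 + denials + 1

# With 3 denials, the single-redirect client sends 8 messages,
# while the multi-redirect client sends only 5.
```

The gap grows linearly with the number of denials, which is why the multi-address schemes degrade more gracefully under churn.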
Load balancing ability by the Web server: The Web server in CoralCDN and Firecoral has no awareness of conditions such as the number of participating clients, nor can it manage the proportion of sharing clients. Even as the Web server continually loses bandwidth, it cannot recover because it has no way to control load balancing. Moreover, if no copy of an object exists among the peers while the server is offline, these two schemes cannot handle the situation. In contrast, the Web server in HTTP-P2P and CCWS can reserve its bandwidth even under a high request rate by adjusting the proportion of redirection (e.g., the redirection counter in HTTP-P2P or the corresponding parameter in CCWS). This ability makes the Web server more resistant to unpredictable incoming traffic.
Reliable lifetime of shared objects: We suppose that the lifetime of a shared object in CoralCDN is longer than in the others because its serving peers (Coral HTTP proxies) can be regarded as authenticated and trusted. The serving peers in the other three schemes are actual clients, so their reliability cannot be assured.
Table 3. Comparison between CCWS and previous schemes. (Criteria: low latency of the first access; low cost of peers for ready to share; small logical hop counts under churn condition; load balancing ability by the Web server; reliable lifetime of shared objects.)
We conducted performance evaluations of our architecture as contrasted with the traditional Web service model. We summarize our simulation parameters in Table 4.
Table 4. Simulation parameters: the number of subrouters; the number of clients for each subrouter; the total number of clients; the size of items in the Web server; the probability of distribution; the rate of requesting clients.
4.2.1 Simulation environment
To measure the traffic on the Web server side more precisely, we form a routing domain between a Web server and clients. The Web server is connected to a main router with multiple subrouters, and every subrouter is connected to the others. Each subrouter serves its own clients, and there is no direct link between clients, so a packet from one client to another traverses a bounded number of hops through the subrouters. The link from the Web server to the subrouters has a delay of 20 ms and a bandwidth of 10 Mb/s in bidirectional, full-duplex mode; the links between subrouters are configured identically. Every router adopts tail-drop queuing. The links between subrouters and clients are randomly assigned a delay of 20-50 ms and a bandwidth of 1-5 Mb/s. While conducting simulations, we collect the numbers of request and response packets on the server-side link as well as the number of packet losses. Our simulation environment is NS-2 2.34, which is commonly used for P2P simulation. We modified its internal C++ code to support our protocol. Because we conduct simulations on top of the full TCP/IP stack, the simulator can tolerate only fewer than 500 clients. We simulate CCWS with varying numbers (< 500) of clients and, for each number of clients, with a varying server parameter (i.e., the probability that the Web server directly delivers the requested item). The simulation shows that the amount of server traffic depends mainly on this parameter and not on the number of clients, which indicates that the results may still hold for a real Web environment with more than 500 clients. To show that CCWS is sustainable against massive traffic consumption, we create an abrupt traffic surge to the Web server in which clients simultaneously and consistently generate request messages every 10 s. The simulation results show that CCWS resists such a surge with only a slight increase in the request drop rate. The rest of this section presents our simulation results.
Sections 4.2.2 and 4.2.3 compare the Web service traffic of the server-client model and CCWS. In these simulations, we vary either the size of items or the number of clients while the other variables are fixed. In Section 4.2.4, we conduct the same test but vary the server parameter from 0.2 to 1. To check the stability of our protocol, Section 4.2.5 defines Rreq, the proportion of requesting users. Section 4.2.6 demonstrates the influence of both the size of Web items and the number of clients on the traffic of the Web server.
4.2.2 By increasing the amount of items
Figure 3 shows the server traffic comparison between the server-client Web service model and CCWS. In this simulation, we fix the number of clients and increase the item size from 19 to 180 kB. For 10 s, every client simultaneously requests items from the server in both the server-client mode and the CCWS mode. The left part of Figure 3 shows the total server traffic for 10 s. As the amount of Web content grows, the server traffic of the server-client model increases more rapidly than that of CCWS, up to approximately 60 kB. Beyond 88 kB, both the server-client model and CCWS can no longer tolerate the number of packets and drop them, as also shown in the right part of Figure 3. We measured the drop rate as the number of dropped packets over the total number of request packets.
However, CCWS tolerates the congested state longer than the server-client model and reserves its bandwidth up to approximately 30 kB.
4.2.3 By increasing the number of clients
Figure 4 shows a similar comparison, but here we fixed the item size and increased the number of subrouters from 1 to 46, so the total number of clients grows from 10 to 460 in this simulation. As the number of clients increases, the Web server traffic shows a trend similar to that in Figure 3. Both simulation models suffer under the massive traffic, and responses to clients are delayed. However, the drop rate of our scheme increases only slightly and is remarkably lower than that in Figure 3. This phenomenon reflects a property of P2P networks: with more participants, the network becomes more stable.
4.2.4 Relation: , , and
In this simulation, we fixed the number of clients, adjusted the server parameter from 0.2 to 1, and increased the item size, as shown in Figure 5. With every 0.2 reduction of the parameter, the server traffic is reduced by approximately 20%. For a fixed item size, the server traffic for 10 s is recorded as 1574.7, 3036.1, 4682.1, 6235.8, and 7789.5 kB at the parameter values 0.2, 0.4, 0.6, 0.8, and 1, respectively.
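These measurements are consistent with server traffic scaling almost linearly in the redirection parameter. A quick check, taking the traffic values from the text and assuming the parameter steps from 0.2 to 1 in increments of 0.2:

```python
# Server traffic for 10 s at the five parameter settings (kB), from the text.
traffic = [1574.7, 3036.1, 4682.1, 6235.8, 7789.5]
params = [0.2, 0.4, 0.6, 0.8, 1.0]  # assumed 0.2 steps from 0.2 to 1

baseline = traffic[-1]  # traffic when every request is served directly
for p, t in zip(params, traffic):
    # each measured fraction of the baseline tracks the parameter
    # to within a few percent
    print(f"p={p:.1f}: measured fraction {t / baseline:.3f}")
```

The near-linear relation is what allows the Web server to budget its bandwidth directly by choosing the parameter value.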
In Figure 6, the server parameter again varies from 0.2 to 1, whereas the number of clients increases from 20 to 110 at a fixed item size. As the number of clients increases, the benefit of a lower parameter value becomes pronounced. The shapes of the plots are very similar in Figures 5 and 6.
4.2.5 Relation: and
We also examined the mutual relation between the server parameter and Rreq, the rate of clients that request items from the Web server. For example, when Ntotal = 100 and Rreq = 0.7, 30 clients in the network store items and 70 clients request items for themselves. Figure 7 shows the simulation result: our protocol is stable regardless of Rreq. This experiment verifies the resistance of our scheme against the Slashdot effect; even when Rreq = 1 (i.e., the worst case), the traffic of the Web server shows no irregularity.
4.2.6 Relation: and
We have seen that both factors, the item size and the number of clients, increase the traffic of the Web server, and we have demonstrated through our simulations that the traffic can be controlled by adjusting the server parameter. The purpose of the simulation in this section is to show that our protocol conserves the traffic of the Web server more efficiently when the number of clients increases than when the amount of items grows. The server parameter is set to 0.7, and we measured the traffic in two settings: a fixed number of clients with variable item size, and a variable number of clients with fixed item size. In Figure 8, the traffic naturally increases in both cases, but there is a noticeable difference. With 21 clients requesting 71 kB items, the server-side traffic is 808 kB; in contrast, with 77 clients requesting 19 kB items, the traffic is 404 kB. If the server did not use CCWS, the former would generate 21 x 71 kB = 1491 kB and the latter 77 x 19 kB = 1463 kB. That is, although the two cases require nearly the same bandwidth without CCWS, the latter generates only about half the traffic of the former when the Web server uses CCWS.
Discussions: The increased traffic of a Web server comes mainly from two sources: growth in the size of items and growth in the number of requests. According to the simulation results, the traffic of the Web server naturally increases with both factors. From the Web server's point of view, if it wants to conserve its bandwidth for any reason, it can make an effort to reduce the size of items; however, it is difficult for the Web server to overcome a surge in requests. Hence, handling increasing client requests is the most important issue for the availability of the Web server. We have shown that our scheme can efficiently reduce the traffic against increases in both the size of Web items and the number of requests. Moreover, the results show that our scheme is even more efficient against an increase in the number of requests than against an increase in the size of Web items, which is desirable in practical environments.
4.2.7 Resistance to churn condition
We also measured the average time for all clients to obtain the objects of the Web server under the churn condition. In this simulation, we set two major parameters: the proportion of adversaries among the total clients and the size of the client-address list. "The proportion of adversaries" means that this fraction of clients will reject or not respond to requests from other valid clients. The goal of the simulation is to show that, although an increasing proportion of adversaries makes CCWS less stable (e.g., clients must wait longer to obtain all objects of the Web server), extending the size of the list can reduce the waiting time of clients. The Web server has 10 objects (each of size 50 kB). When a requesting client is redirected, it waits for a response for 1 s after requesting another peer. If the request does not reach the peer or is denied, the client requests the object from the next peer in the list. If requests to all listed peers fail, the client re-requests the object from the Web server. Figure 9 shows the result of our simulation: a high proportion of adversaries increases the average waiting time of clients, but a longer list clearly shortens it.
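The retry behavior under churn can be sketched as the following loop; the peer callables and the fetch API are hypothetical stand-ins for the protocol messages, and the per-peer wait models the 1 s response timeout used in the simulation:

```python
from typing import Callable, List, Optional, Tuple

def fetch_with_fallback(
    peers: List[Callable[[], Optional[bytes]]],
    server_fetch: Callable[[], bytes],
    per_peer_wait: float = 1.0,  # 1 s wait per peer, as in the simulation
) -> Tuple[bytes, float]:
    """Try each redirected peer in list order; if all fail,
    fall back to the Web server. Returns (object, waited_seconds)."""
    waited = 0.0
    for peer in peers:
        obj = peer()           # request the object from this peer
        if obj is not None:    # peer responded within the timeout
            return obj, waited
        waited += per_peer_wait  # denied or unreachable: charge the wait, try next
    return server_fetch(), waited  # every listed peer failed

# Example: the first two peers are adversarial (deny), the third serves.
deny = lambda: None
serve = lambda: b"object"
obj, waited = fetch_with_fallback([deny, deny, serve], lambda: b"object")
```

In this toy run the client accumulates 2 s of waiting before the third peer serves the object, matching the intuition that a longer list bounds the worst-case wait before falling back to the server.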
Availability is one of the most sensitive issues in Web service. In this paper, we proposed an architecture and a protocol for Web service availability, called client cloud Web service (CCWS). To ease integration with current systems, the CCWS layer is placed underneath the HTTP layer, and it provides functionalities such as sharing items of the Web server. By using CCWS, the Web server can remarkably reduce its Web service traffic under various circumstances. We have also shown that our approach is practical irrespective of Web item size and the number of nodes. Although we believe the simulation results are reasonable, further verification is needed to confirm practicability in real environments. Thus, our future work aims to deploy the proposed protocol on a realistic test bed such as PlanetLab.
This work was supported by the IT R&D program of MKE/KEIT (10035125, Development of Smart Border Router), and this research was supported by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2011-0020516).