The role of caching proxies is, on the one hand, to save bandwidth: in the ideal case, many requests for the same document result in a single request leaving the local network. On the other hand, for the user, the proxy accelerates the delivery of documents found in the cache, increasing the apparent download speed.
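The bandwidth argument can be sketched as follows. This is a deliberately simplified, in-memory illustration (no expiry, validation, or HTTP semantics); the class and its names are hypothetical and introduced only for this example:

```python
from typing import Callable, Dict

class CachingProxy:
    """Toy model of a caching proxy: many requests for the same URL
    trigger a single upstream fetch; later requests hit the cache."""

    def __init__(self, fetch: Callable[[str], bytes]) -> None:
        self._fetch = fetch          # upstream fetch (e.g. an HTTP GET)
        self._cache: Dict[str, bytes] = {}
        self.upstream_requests = 0   # requests that leave the local network

    def get(self, url: str) -> bytes:
        if url not in self._cache:   # cache miss: one request exits the network
            self.upstream_requests += 1
            self._cache[url] = self._fetch(url)
        return self._cache[url]      # cache hit: served locally
```

With such a proxy in place, one hundred client requests for the same document translate into a single request on the external link, which is precisely the bandwidth saving described above.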
However, the exponential proliferation of documents available on the Web quickly motivated the major Internet players to find solutions to avoid saturating and slowing the network. Indeed, by the mid-1990s the HTTP protocol had become very widely established as the standard data transport for most client/server applications. Its simplicity of development and deployment makes it an almost obvious choice in most cases. In addition, the emergence of new tools such as the Java language and the multiplication of software libraries further favor Web technologies for new applications, whether in banking, industry, etc. But the proliferation of services still eludes, to this day, the forms of optimization that can currently be installed in the network infrastructure.
Caching proxies responded very quickly to these problems and were widely deployed on the Internet. This first, very practical form meets a certain number of requirements: lower bandwidth consumption and acceleration of the apparent speed of the network. But the situation is far from ideal, and many problems and limitations associated with caching proxies have been identified. These problems are related to the development of rich documents distributed on the Web (audio, video, etc.) and, above all, to the increased interactivity driven by ever more advanced Web services. Thus, as the Web evolves from the simple distribution of documents to online services, the effectiveness of caching proxies deteriorates.
Faced with these developments, there are roughly two types of approaches. On the one hand, there are standards proposed by the W3C, for example the whole family of XML specifications and the transition from HTTP 1.0 to HTTP 1.1; on the other hand, there are competing solutions that diverge from the Internet's open standards and address specific needs, but drastically restrict the universality of the Web. In the latter category is, for example, the ESI (Edge Side Includes) language. This trend reflects the need to develop more local solutions for specific applications, notably in intranets, where local and particular solutions rely on Internet technologies.
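For illustration, ESI lets an origin server mark up a page as fragments to be assembled by an edge cache, so that the stable parts remain cacheable while only the dynamic parts are fetched per request. A minimal sketch (the URLs are placeholders, not from the original):

```html
<html>
  <body>
    <!-- stable fragment: cacheable at the edge for a long time -->
    <esi:include src="http://example.com/header" />
    <!-- personalized fragment: fetched from the origin on each request -->
    <esi:include src="http://example.com/account/summary" />
  </body>
</html>
```

Only surrogates that understand the `esi:` vocabulary can process such a page, which is exactly the loss of universality mentioned above.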
In this context, caching proxies certainly have a role to play. They are very widespread, since they are found at the exit gateways of almost all intranets. For the moment, however, the main limitation of proxies lies in their lack of adaptation and openness to their most direct neighborhood: the community of which they are the center. If we can overcome these difficulties, we will enrich a platform that already exists in many parts of the network. With these enhancements to such a widely deployed service, we will be able to offer improvements and new services to numerous communities, if not to all users.
There are, of course, other checkpoints scattered throughout network architectures: routers, firewalls, tunnels, etc. These, however, operate at lower OSI layers (transport, network), whereas caching proxies work at the level of the HTTP protocol and therefore at the application layer.