In the simplest terms, Ripley clones the client-side computation on the server, performs the computation there, verifies the output, and only then forwards the response to the user. It is another layer added between the user and the server, but it lives entirely on the server end. This layer consists of a clone of the client plus another module which the research paper calls the Ripley Checker. So the computation that AJAX moved to the user's end for better performance is replayed at the server's end for verification, while the user still gets the same responsive, client-side experience.
Ripley’s Architecture (from the research paper):
Legend: S = server; C' = client; C = cloned client (the server-side replica)
Capture user events: RIPLEY augments the client to capture user events within the browser.
Transmit events to the server for replay: The client run-time is modified to transmit user events to the client's replica C for replay.
Compare server and client results: The server component S is augmented with a RIPLEY checker that compares arriving RPCs m' and m, received from the client C' and the server-based client replica C respectively, looking for discrepancies. (A sketch of this flow follows below.)
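To make the first two steps concrete, here is a minimal sketch in TypeScript of what event capture and transmission on the client (C') might look like. Everything here is a hypothetical illustration: the EventRecord shape, the /ripley/events endpoint, and the batching interval are my assumptions, and the actual paper instruments the browser run-time automatically rather than having a developer write this by hand.

```typescript
// Hypothetical sketch of RIPLEY-style event capture on the client (C').

interface EventRecord {
  type: string;      // e.g. "click", "keypress"
  targetId: string;  // DOM id of the element the user interacted with
  timestamp: number; // ordering information for deterministic replay
}

const eventLog: EventRecord[] = [];

// Capture user events within the browser and queue them for the server.
function captureEvent(e: Event): void {
  const target = e.target as HTMLElement | null;
  eventLog.push({
    type: e.type,
    targetId: target?.id ?? "",
    timestamp: Date.now(),
  });
}

["click", "keypress", "change"].forEach((t) =>
  document.addEventListener(t, captureEvent, true)
);

// Periodically transmit the queued events to the server-side replica C.
// The endpoint and batching interval are assumptions, not the paper's design.
async function flushEvents(): Promise<void> {
  if (eventLog.length === 0) return;
  const batch = eventLog.splice(0, eventLog.length);
  await fetch("/ripley/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(batch),
  });
}

setInterval(flushEvents, 500);
```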
Essentially, the user's actions are replayed on C, so the application effectively runs twice. The RPCs that C' sends to S are checked against those produced by C by the Ripley Checker, and the result is given back to the user. Note that the user's PC is not idle: the client still runs the application for responsiveness, but its results are never trusted; the authoritative computation happens in the replica. Something like taking the concept of thin clients to the Web, maybe?
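And here is a similarly hedged sketch of what the checker's comparison might look like on the server. The Rpc shape, rpcEquals, and ripleyCheck are made-up names for illustration; what the paper describes is comparing the RPC streams from C' and C and rejecting the client on any mismatch.

```typescript
// Hypothetical sketch of a RIPLEY-style checker on the server S.

interface Rpc {
  method: string;   // the server function being invoked
  args: unknown[];  // its serialized arguments
}

// Structural comparison of two RPCs; a real checker would compare
// the canonical serialized form of each message.
function rpcEquals(a: Rpc, b: Rpc): boolean {
  return JSON.stringify(a) === JSON.stringify(b);
}

// m' arrives from the untrusted client C'; m comes from the trusted
// replica C. Only if they agree is the RPC allowed to execute on S.
function ripleyCheck(mPrime: Rpc, m: Rpc): Rpc {
  if (!rpcEquals(mPrime, m)) {
    // A discrepancy means C' deviated from the expected computation,
    // e.g. a tampered script inflating a score or a bid.
    throw new Error(`RIPLEY: mismatch on ${mPrime.method}, client rejected`);
  }
  return m; // trust the replica's version of the RPC
}

// Example: both streams agree, so the RPC goes through.
ripleyCheck(
  { method: "placeBid", args: [42, 100] }, // m' from C'
  { method: "placeBid", args: [42, 100] }, // m from C
);
```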
What happens at the server end is essentially verification by comparing the incoming data against ideal data generated at the receiver, a concept a lot of engineers will recognize from error detection in communication systems. Ripley's method of security might be the future, Believe It Or Not.