There is also a version number, but we ignore that in the following discussion. For example, the NFS server has been assigned program number x, and within this program getattr is procedure 1, setattr is procedure 2, read is procedure 6, write is procedure 8, and so on. The server—which may support several program numbers—is responsible for calling the specified procedure of the specified program. A SunRPC request really represents a request to call the specified program and procedure on the particular machine to which the request was sent, even though the same program number may be implemented on other machines in the same network.
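As a sketch of this dispatch step, a server can key its handlers by the (program, procedure) pair. The program number and handler bodies below are made up for illustration; only the procedure numbers for getattr and read come from the text.

```python
# Hypothetical program number; real SunRPC program numbers are assigned
# centrally (the actual NFS number is omitted here).
PROG = 42

def getattr_handler(args):   # stand-in for the real procedure body
    return ("getattr", args)

def read_handler(args):
    return ("read", args)

# Procedure numbers from the text: getattr is 1, read is 6.
DISPATCH = {
    (PROG, 1): getattr_handler,
    (PROG, 6): read_handler,
}

def handle_request(program, procedure, args):
    # The server may support several program numbers; it dispatches on both.
    handler = DISPATCH.get((program, procedure))
    if handler is None:
        raise LookupError("unknown program/procedure")
    return handler(args)
```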
SunRPC header formats: (a) request; (b) reply. Different program numbers may belong to different servers on the same machine. These different servers have different transport-layer demux keys, called transport selectors. How can a SunRPC client that wants to talk to a particular program determine which transport selector to use to reach the corresponding server? The solution is to assign a well-known address to just one program on the remote machine and let that program handle the task of telling clients which transport selector to use to reach any other program on the machine.
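A minimal sketch of this idea, with hypothetical program numbers and selectors (the real SunRPC mapper protocol is not shown): every program registers its transport selector with the one well-known program, and clients ask that program for the selector they need.

```python
# The mapper program's own selector must be agreed on in advance; all
# numeric values here are invented for illustration.
MAPPER_SELECTOR = 9999
registry = {}  # program number -> transport selector

def register(program, selector):
    """Called by each server program at startup."""
    registry[program] = selector

def lookup(program):
    """Called by clients; a real client would send this query over the
    network to MAPPER_SELECTOR rather than read the dict directly."""
    return registry.get(program)

register(300017, 5432)   # a hypothetical program announces its selector
```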
In SunRPC this program is the Port Mapper: its program number is x, and it listens on a well-known port. The client also caches the program-to-port-number mapping so that it need not go back to the Port Mapper each time it wants to talk to the NFS program. To match up a reply message with the corresponding request, so that the result of the RPC can be returned to the correct caller, both request and reply message headers include an XID (transaction ID) field, as shown in the header formats above. After the server has successfully replied to a given request, it does not remember the XID.
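The XID matching just described can be sketched as a table of outstanding calls; the function names here are illustrative, not SunRPC's actual interface.

```python
import itertools

_next_xid = itertools.count(1)
pending = {}  # XID -> the call that is waiting for its result

def send_request(proc, args):
    xid = next(_next_xid)
    pending[xid] = (proc, args)   # remember who is waiting; the XID
    return xid                    # travels in the request header

def deliver_reply(xid, result):
    waiting = pending.pop(xid, None)  # match reply to request by XID
    if waiting is None:
        return None                   # stale or duplicate reply: dropped
    return (waiting, result)
```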
Because of this, SunRPC cannot guarantee at-most-once semantics. It does not implement its own reliability, so it is only reliable if the underlying transport is reliable. The ability to send request and reply messages that are larger than the network MTU is also dependent on the underlying transport.
In other words, SunRPC does not make any attempt to improve on the underlying transport when it comes to reliability and message size. Since SunRPC can run over many different transport protocols, this gives it considerable flexibility without complicating the design of the RPC protocol itself.
Returning to the SunRPC header format shown in the figure, the request message contains variable-length Credentials and Verifier fields, both of which are used by the client to authenticate itself to the server—that is, to give evidence that the client has the right to invoke the server.
How a client authenticates itself to a server is a general issue that must be addressed by any protocol that wants to provide a reasonable level of security.
This topic is discussed in more detail in another chapter. DCE-RPC, the second approach we consider, differs from SunRPC in several ways, which we highlight in the following paragraphs. The client sends a Request message, the server eventually replies with a Response message, and the client acknowledges (Ack) the response.
Instead of the server acknowledging the request messages, however, the client periodically sends a Ping message to the server, which responds with a Working message to indicate that the remote procedure is still in progress. Although not shown in the figure, other message types are also supported. For example, the client can send a Quit message to the server, asking it to abort an earlier call that is still in progress; the server responds with a Quack (quit acknowledgment) message.
Also, the server can respond to a Request message with a Reject message (indicating that a call has been rejected), and it can respond to a Ping message with a Nocall message (indicating that the server has never heard of the caller).
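A client's handling of the possible answers to a Ping might look like the following sketch; the message names come from the text, while the handling policy is illustrative.

```python
def on_ping_reply(msg_type):
    """Decide what the client should do after pinging the server."""
    if msg_type == "Working":
        return "wait"        # the remote procedure is still in progress
    if msg_type == "Nocall":
        return "retransmit"  # the server has never heard of the call
    raise ValueError("unexpected reply to Ping: " + msg_type)
```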
At any given time, there can be only one message transaction active on a given channel. The FragmentNum field uniquely identifies each fragment that makes up a given request or reply message.
Both the client and server implement a selective acknowledgment mechanism, which works as follows. We describe the mechanism in terms of a client sending a fragmented request message to the server; the same mechanism applies when a server sends a fragmented response to the client.
First, each fragment that makes up the request message contains both a unique FragmentNum and a flag indicating whether this packet is an intermediate fragment of a call (frag) or the last fragment of a call; request messages that fit in a single packet carry the last-fragment flag. The server knows it has received the complete request message when it has the last-fragment packet and there are no gaps in the fragment numbers. Second, in response to each arriving fragment, the server sends a Fack (fragment acknowledgment) message to the client.
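The completeness test can be sketched as follows; numbering fragments from 1 is an assumption made for illustration.

```python
def request_complete(received, last_fragnum):
    """received: set of FragmentNum values seen so far.
    last_fragnum: FragmentNum of the packet carrying the last-fragment
    flag, or None if that packet has not arrived yet."""
    if last_fragnum is None:
        return False                     # the final fragment is missing
    # complete means: last fragment seen and no gaps before it
    return received == set(range(1, last_fragnum + 1))
```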
This acknowledgment identifies the highest fragment number that the server has successfully received. In other words, the acknowledgment is cumulative, much like in TCP. In addition, however, the server selectively acknowledges any higher fragment numbers it has received out of order. It does so with a bit vector that identifies these out-of-order fragments relative to the highest in-order fragment it has received.
Finally, the client responds by retransmitting the missing fragments. The figure below illustrates how this all works. Suppose the server has successfully received fragments up through number 20, plus fragments 23 and 25, out of order; it acknowledges fragment 20 cumulatively and sets the bits for fragments 23 and 25 in the selective-acknowledgment vector. So as to support an almost arbitrarily long bit vector, the size of the vector (measured in 32-bit words) is given in the SelAckLen field.
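Building such a Fack can be sketched as below. The exact wire layout of DCE-RPC's vector is not shown, and the 32-bit word size is an assumption.

```python
WORD_BITS = 32  # assumed word size for the SelAckLen count

def build_fack(received):
    """Return (highest in-order fragment, selective-ack bit vector,
    vector length in words) for a set of received fragment numbers."""
    highest = 0
    while highest + 1 in received:
        highest += 1                        # cumulative part, as in TCP
    bits = 0
    for f in received:
        if f > highest:                     # out-of-order fragment
            bits |= 1 << (f - highest - 1)  # bit 0 = fragment highest+1
    words = (bits.bit_length() + WORD_BITS - 1) // WORD_BITS
    return highest, bits, words

# The example above: everything through 20, plus 23 and 25 out of order.
ack, vector, selacklen = build_fack(set(range(1, 21)) | {23, 25})
```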
Fragmentation with selective acknowledgments. DCE-RPC also uses these acknowledgments for flow control: each Fack message not only acknowledges received fragments but also informs the sender of how many fragments it may now send. Given the complexity of congestion control, it is perhaps not surprising that some RPC protocols avoid it by avoiding fragmentation. In summary, designers have quite a range of options open to them when designing an RPC protocol.
SunRPC takes the more minimalist approach and adds relatively little to the underlying transport beyond the essentials of locating the right procedure and identifying messages. DCE-RPC adds more functionality, with the possibility of improved performance in some environments at the cost of greater complexity.
The difference between classic RPC and RPC to a cloud service is essentially an extra level of indirection. In the classic model, one server process is presumed to be enough to serve calls from all the client processes that might call it. With cloud services, the client instead invokes a method on a service, which, in order to support calls from arbitrarily many clients at the same time, is implemented by a scalable number of server processes, each potentially running on a different server machine.
This is where the cloud comes into play: datacenters make a seemingly infinite number of server machines available to scale up cloud services. Using RPC to invoke a scalable cloud service. In the RPC model, you can formally specify an interface to the remote procedures using a language designed for this purpose.
After you create an interface, you must pass it through the MIDL compiler. This compiler generates the stubs that translate local procedure calls into remote procedure calls. Stubs are placeholder functions that make the calls to the run-time library functions, which manage the remote procedure call. The advantage of this approach is that the network becomes almost completely transparent to your distributed application. Your client program calls what appear to be local procedures; the work of turning them into remote calls is done for you automatically.
All the code that translates data, accesses the network, and retrieves results is generated for you by the MIDL compiler and is invisible to your application. In the local model, the caller places arguments to a procedure in a specified location (such as a result register). Then the caller transfers control to the procedure.
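To make concrete what a generated stub does, here is a hand-written sketch. `rpc_call` stands in for the hypothetical run-time library and echoes locally instead of using the network, so the stub can be exercised on its own.

```python
import json

def rpc_call(procedure, payload):
    # A real run-time library would transmit payload to the server and
    # return its reply; this local stand-in pretends to be the server.
    request = json.loads(payload)
    a, b = request["args"]
    return json.dumps({"result": a + b})   # "server" computes the sum

def add_stub(a, b):
    """Looks like a local procedure to the caller; actually marshals
    the arguments, invokes the runtime, and unmarshals the result."""
    payload = json.dumps({"proc": "add", "args": [a, b]})  # marshal
    reply = rpc_call("add", payload)
    return json.loads(reply)["result"]                     # unmarshal
```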
The caller eventually regains control, extracts the results of the procedure, and continues execution. RPC works in a similar manner, in that one thread of control winds logically through two processes: the caller process and the server process.
First, the caller process sends a call message that includes the procedure parameters to the server process. Then the caller process blocks while waiting for a reply message. Next, a process on the server side, which is dormant until the arrival of the call message, extracts the procedure parameters, computes the results, and sends a reply message. The server then waits for the next call message.
Finally, a process on the caller's side receives the reply message, extracts the results of the procedure, and the caller resumes execution; Figure 2 depicts this exchange. In the RPC model, only one of the two processes is active at any given time. Furthermore, this model is only an example. The RPC protocol makes no restrictions on the concurrency model implemented, and others are possible. For example, an implementation can choose asynchronous remote procedure calls so that the client can continue working while waiting for a reply from the server.
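The synchronous exchange in the steps above can be sketched with two threads and queues standing in for the network; this illustrates the control flow only, not a real transport.

```python
import queue
import threading

call_q, reply_q = queue.Queue(), queue.Queue()

def server():
    params = call_q.get()     # dormant until the call message arrives
    reply_q.put(sum(params))  # compute the results and send a reply
    # (a real server would loop back to wait for the next call message)

def remote_sum(params):
    threading.Thread(target=server, daemon=True).start()
    call_q.put(params)        # send the call message with the parameters
    return reply_q.get()      # block until the reply message arrives

total = remote_sum([1, 2, 3])
```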
Additionally, the server can create a task to process incoming requests and thereby remain free to receive other requests.
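That option can be sketched by having the dispatcher hand each request to a worker thread; this is a minimal illustration, not a production server loop.

```python
import threading

results = []
lock = threading.Lock()

def handle(request):
    with lock:
        results.append(request * 2)   # stand-in for the procedure body

def serve(requests):
    # the server spawns a task per request, remaining free for the next one
    workers = [threading.Thread(target=handle, args=(r,)) for r in requests]
    for w in workers:
        w.start()
    for w in workers:
        w.join()   # joined here only so the demo's results are complete

serve([1, 2, 3])
```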