svcrdma: Do not send XDR roundup bytes for a write chunk
author	Chuck Lever <chuck.lever@oracle.com>
	Thu, 12 Nov 2015 14:44:33 +0000 (09:44 -0500)
committer	J. Bruce Fields <bfields@redhat.com>
	Mon, 23 Nov 2015 19:15:30 +0000 (12:15 -0700)
Minor optimization: when dealing with write chunk XDR roundup, do
not post a Write WR for the zero bytes in the pad. Simply update
the write segment in the RPC-over-RDMA header to reflect the extra
pad bytes.

The Reply chunk is also a write chunk, but the server does not use
send_write_chunks() to send the Reply chunk. That's OK in this case:
the server Upper Layer typically marshals the Reply chunk contents
in a single contiguous buffer, without a separate tail for the XDR
pad.

The comments and the variable naming refer to "chunks" but what is
really meant is "segments." The existing code sends only one
xdr_write_chunk per RPC reply.

The fix assumes this as well. When the XDR pad in the first write
chunk is reached, the assumption is that the Write list is complete
and send_write_chunks() returns.

That will remain a valid assumption until the server Upper Layer can
support multiple bulk payload results per RPC.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
net/sunrpc/xprtrdma/svc_rdma_sendto.c

index 969a1ab75fc3c5fb8011157e4f57e8d08f560b42..bad5eaa9f812befe43955ebd9f03d6d0eec3e3fb 100644 (file)
@@ -342,6 +342,13 @@ static int send_write_chunks(struct svcxprt_rdma *xprt,
                                                arg_ch->rs_handle,
                                                arg_ch->rs_offset,
                                                write_len);
+
+               /* Do not send XDR pad bytes */
+               if (chunk_no && write_len < 4) {
+                       chunk_no++;
+                       break;
+               }
+
                chunk_off = 0;
                while (write_len) {
                        ret = send_write(xprt, rqstp,