author    Tonghao Zhang <xiangxia.m.yue@gmail.com>
          Fri, 1 Nov 2019 14:23:49 +0000 (22:23 +0800)
committer David S. Miller <davem@davemloft.net>
          Mon, 4 Nov 2019 01:18:03 +0000 (17:18 -0800)
commit    57f7d7b9164426c496300d254fd5167fbbf205ea
tree      b5605cfd801ce05951f89196c937485bf76913c2
parent    a7f35e78e701744368d4ac38bdb61a86bfac2162
net: openvswitch: optimize flow-mask lookup

A full lookup in the flow table traverses the whole mask array.
If the mask array grows large, the number of invalid (deleted)
flow-mask entries increases and lookup performance drops.

One bad case, for example, where M means a valid flow-mask and
NULL means a deleted one:

+-------------------------------------------+
| M | NULL | ...                  | NULL | M|
+-------------------------------------------+

In that case, without this patch, openvswitch traverses the whole
mask array, because one valid flow-mask remains at the tail. This
patch changes the way flow-masks are inserted and deleted, so the
mask array is kept packed as shown below, with no NULL hole among
the valid entries. In the fast path, flow_lookup can therefore
"break" out of the "for" loop (instead of "continue") as soon as
it reads a NULL flow-mask; see the sketch after the diagram.

         "break"
            v
+-------------------------------------------+
| M | M |  NULL |...           | NULL | NULL|
+-------------------------------------------+
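A minimal userspace sketch of that fast-path idea, not the kernel
code itself; mask_array, masked_lookup() and flow_lookup() here are
simplified stand-ins for the kernel structures and helpers:

    #include <stddef.h>

    struct sw_flow_mask;   /* opaque in this sketch */
    struct sw_flow;        /* opaque in this sketch */

    struct mask_array {
            int max;                      /* allocated slots */
            int count;                    /* valid masks, packed at front */
            struct sw_flow_mask *masks[]; /* masks[0..count-1] non-NULL */
    };

    /* hypothetical stand-in for the kernel's masked flow lookup */
    struct sw_flow *masked_lookup(const struct sw_flow_mask *mask);

    struct sw_flow *flow_lookup(struct mask_array *ma)
    {
            int i;

            for (i = 0; i < ma->max; i++) {
                    struct sw_flow_mask *mask = ma->masks[i];

                    if (!mask)
                            break; /* packed: nothing valid past a hole */

                    struct sw_flow *flow = masked_lookup(mask);
                    if (flow)
                            return flow;
            }
            return NULL;
    }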

This patch does not optimize the slow/control path, which still
uses ma->max to traverse the array (a sketch of the compacting
insert/delete follows the list below). Slow path:
* tbl_mask_array_realloc
* ovs_flow_tbl_lookup_exact
* flow_mask_find
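
The insert and delete changes land on this control path. A minimal
userspace sketch of the idea, reusing the hypothetical mask_array
layout from the sketch above (the in-kernel code publishes these
updates with RCU pointer primitives under ovs_lock):

    /* Append the new mask after the last valid one; the caller
     * grows the array (cf. tbl_mask_array_realloc) when full. */
    static int mask_array_add(struct mask_array *ma,
                              struct sw_flow_mask *new)
    {
            if (ma->count >= ma->max)
                    return -1;
            ma->masks[ma->count++] = new;
            return 0;
    }

    /* Fill the hole left at slot i with the last valid mask, so
     * the valid entries stay packed with no NULL in between. */
    static void mask_array_del(struct mask_array *ma, int i)
    {
            ma->count--;
            ma->masks[i] = ma->masks[ma->count];
            ma->masks[ma->count] = NULL;
    }

Because deletion swaps the tail mask into the freed slot, the array
never develops an interior NULL hole, which is what lets the fast
path break on the first NULL it sees.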

Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Tested-by: Greg Rose <gvrose8192@gmail.com>
Acked-by: Pravin B Shelar <pshelar@ovn.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
net/openvswitch/flow_table.c