Patchwork [v2,bpf-next,2/2] bpf: sockmap: initialize sg table entries properly

Submitter Prashant Bhole
Date March 30, 2018, 12:21 a.m.
Message ID <20180330002100.5724-3-bhole_prashant_q7@lab.ntt.co.jp>
Permalink /patch/488695/
State New

Comments

Prashant Bhole - March 30, 2018, 12:21 a.m.
When CONFIG_DEBUG_SG is set, sg->sg_magic is initialized in
sg_init_table() and verified by the sg API while navigating the
scatterlist. We hit a BUG_ON when the magic check fails.

In bpf_tcp_sendpage and bpf_tcp_sendmsg, the struct containing the
scatterlist is already zeroed out, so to avoid an extra memset we use
sg_init_marker() to initialize sg_magic.

Fixed the following:
- In bpf_tcp_sendpage: initialize sg using sg_init_marker()
- In bpf_tcp_sendmsg: replace sg_init_table() with sg_init_marker()
- In bpf_tcp_push: replace memset() with sg_init_table() where a
  consumed sg entry needs to be re-initialized.

Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
---
 kernel/bpf/sockmap.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)
John Fastabend - March 30, 2018, 3:37 a.m.
On 03/29/2018 05:21 PM, Prashant Bhole wrote:
> When CONFIG_DEBUG_SG is set, sg->sg_magic is initialized in
> sg_init_table() and verified by the sg API while navigating the
> scatterlist. We hit a BUG_ON when the magic check fails.
> 
> In bpf_tcp_sendpage and bpf_tcp_sendmsg, the struct containing the
> scatterlist is already zeroed out, so to avoid an extra memset we use
> sg_init_marker() to initialize sg_magic.
> 
> Fixed the following:
> - In bpf_tcp_sendpage: initialize sg using sg_init_marker()
> - In bpf_tcp_sendmsg: replace sg_init_table() with sg_init_marker()
> - In bpf_tcp_push: replace memset() with sg_init_table() where a
>   consumed sg entry needs to be re-initialized.
> 
> Signed-off-by: Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
> ---
>  kernel/bpf/sockmap.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 

Acked-by: John Fastabend <john.fastabend@gmail.com>

Patch

diff --git a/kernel/bpf/sockmap.c b/kernel/bpf/sockmap.c
index 69c5bccabd22..b4f01656c452 100644
--- a/kernel/bpf/sockmap.c
+++ b/kernel/bpf/sockmap.c
@@ -312,7 +312,7 @@  static int bpf_tcp_push(struct sock *sk, int apply_bytes,
 			md->sg_start++;
 			if (md->sg_start == MAX_SKB_FRAGS)
 				md->sg_start = 0;
-			memset(sg, 0, sizeof(*sg));
+			sg_init_table(sg, 1);
 
 			if (md->sg_start == md->sg_end)
 				break;
@@ -656,7 +656,7 @@  static int bpf_tcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
 	}
 
 	sg = md.sg_data;
-	sg_init_table(sg, MAX_SKB_FRAGS);
+	sg_init_marker(sg, MAX_SKB_FRAGS);
 	rcu_read_unlock();
 
 	lock_sock(sk);
@@ -763,10 +763,14 @@  static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
 
 	lock_sock(sk);
 
-	if (psock->cork_bytes)
+	if (psock->cork_bytes) {
 		m = psock->cork;
-	else
+		sg = &m->sg_data[m->sg_end];
+	} else {
 		m = &md;
+		sg = m->sg_data;
+		sg_init_marker(sg, MAX_SKB_FRAGS);
+	}
 
 	/* Catch case where ring is full and sendpage is stalled. */
 	if (unlikely(m->sg_end == m->sg_start &&
@@ -774,7 +778,6 @@  static int bpf_tcp_sendpage(struct sock *sk, struct page *page,
 		goto out_err;
 
 	psock->sg_size += size;
-	sg = &m->sg_data[m->sg_end];
 	sg_set_page(sg, page, size, offset);
 	get_page(page);
 	m->sg_copy[m->sg_end] = true;