Patchwork [dpdk-dev] net/nfp: add CPP bridge as service

Submitter Alejandro Lucero
Date Jan. 11, 2019, 1:25 p.m.
Message ID <20190111132553.10683-1-alejandro.lucero@netronome.com>
Permalink /patch/697677/
State New

Comments

Alejandro Lucero - Jan. 11, 2019, 1:25 p.m.
Netronome's Network Flow Processor chip is highly programmable
with the goal of processing packets at high speed. Processing units
and other chip components are available from the host through the
PCIe CPP (Command Push Pull bus) interface. The NFP PF PMD configures
a CPP handler for setting up and working with vNICs, performing actions
like bringing a link up or down, and accessing extended stats from the
MAC component.

NFP host tools exist which access the NFP components for
programming and debugging, but they require the CPP interface. When the
PMD is bound to the PF, the DPDK app owns the CPP interface, so these
host tools cannot access the NFP through other means like the NFP kernel
drivers.

This patch adds a CPP bridge using the rte_service API which can be
enabled by a DPDK app. Notably, DPDK clients like OVS will not
enable specific service cores, but a secondary process can enable this
CPP bridge service instead, thereby giving those host tools access to
the NFP.

v2:
 - Avoid printfs for debugging
 - fix compilation problems for powerpc

Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
---
 drivers/net/nfp/nfp_net.c      | 379 ++++++++++++++++++++++++++++++++-
 drivers/net/nfp/nfp_net_logs.h |   9 +
 drivers/net/nfp/nfp_net_pmd.h  |   1 +
 3 files changed, 388 insertions(+), 1 deletion(-)
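The commit message notes that apps like OVS will not enable service cores themselves, so a secondary process has to enable the bridge. A hedged sketch of what that secondary process might do follows; the service name "nfp_cpp_bridge" and lcore id are illustrative assumptions (the registered name is not visible in the hunk shown below), while the rte_service_* calls are the standard DPDK service API:

```c
#include <errno.h>
#include <stdint.h>

#include <rte_service.h>

/* Look up the bridge service registered by the PF PMD and run it
 * on a dedicated service core. Returns 0 on success, negative on error. */
static int
enable_cpp_bridge(const char *name, uint32_t lcore_id)
{
	uint32_t service_id;
	int ret;

	/* the PMD registers the service; find its id by name */
	ret = rte_service_get_by_name(name, &service_id);
	if (ret)
		return ret;

	/* add the lcore to the pool of service cores */
	ret = rte_service_lcore_add(lcore_id);
	if (ret && ret != -EALREADY)
		return ret;

	/* map the bridge service onto that core and start both */
	ret = rte_service_map_lcore_set(service_id, lcore_id, 1);
	if (ret)
		return ret;

	ret = rte_service_runstate_set(service_id, 1);
	if (ret)
		return ret;

	return rte_service_lcore_start(lcore_id);
}
```

Run from a secondary process (--proc-type=secondary) after the primary has initialized the PMD; the chosen service core then loops in the bridge's service callback.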
Ferruh Yigit - Jan. 11, 2019, 4:42 p.m.
On 1/11/2019 1:25 PM, Alejandro Lucero wrote:
> The Netronome's Network Flow Processor chip is highly programmable
> with the goal of processing packets at high speed. Processing units
> and other chip components are available from the host through the
> PCIe CPP(Command Push Pull bus) interface. The NFP PF PMD configures
> a CPP handler for setting up and working with vNICs, perform actions
> like link up or down, or accessing extended stats from the MAC component.
> 
> There exist NFP host tools which access the NFP components for
> programming and debugging but they require the CPP interface. When the
> PMD is bound to the PF, the DPDK app owns the CPP interface, so these
> host tools can not access the NFP through other means like NFP kernel
> drivers.
> 
> This patch adds a CPP bridge using the rte_service API which can be
> enabled by a DPDK app. Interestingly, DPDK clients like OVS will not
> enable specific service cores, but this can be performed with a
> secondary process specifically enabling this CPP bridge service and
> therefore giving access to the NFP to those host tools.
> 
> v2:
>  - Avoid printfs for debugging
>  - fix compilation problems for powerpc
> 
> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>

Applied to dpdk-next-net/master, thanks.
Thomas Monjalon - Jan. 13, 2019, 9:41 p.m.
11/01/2019 17:42, Ferruh Yigit:
> On 1/11/2019 1:25 PM, Alejandro Lucero wrote:
> > The Netronome's Network Flow Processor chip is highly programmable
> > with the goal of processing packets at high speed. Processing units
> > and other chip components are available from the host through the
> > PCIe CPP(Command Push Pull bus) interface. The NFP PF PMD configures
> > a CPP handler for setting up and working with vNICs, perform actions
> > like link up or down, or accessing extended stats from the MAC component.
> > 
> > There exist NFP host tools which access the NFP components for
> > programming and debugging but they require the CPP interface. When the
> > PMD is bound to the PF, the DPDK app owns the CPP interface, so these
> > host tools can not access the NFP through other means like NFP kernel
> > drivers.
> > 
> > This patch adds a CPP bridge using the rte_service API which can be
> > enabled by a DPDK app. Interestingly, DPDK clients like OVS will not
> > enable specific service cores, but this can be performed with a
> > secondary process specifically enabling this CPP bridge service and
> > therefore giving access to the NFP to those host tools.
> > 
> > v2:
> >  - Avoid printfs for debugging
> >  - fix compilation problems for powerpc
> > 
> > Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> 
> Applied to dpdk-next-net/master, thanks.

It does not compile with 32-bit toolchain.

Please check the occurrences of %lu, thanks.
Ferruh Yigit - Jan. 14, 2019, 10:40 a.m.
On 1/13/2019 9:41 PM, Thomas Monjalon wrote:
> 11/01/2019 17:42, Ferruh Yigit:
>> On 1/11/2019 1:25 PM, Alejandro Lucero wrote:
>>> The Netronome's Network Flow Processor chip is highly programmable
>>> with the goal of processing packets at high speed. Processing units
>>> and other chip components are available from the host through the
>>> PCIe CPP(Command Push Pull bus) interface. The NFP PF PMD configures
>>> a CPP handler for setting up and working with vNICs, perform actions
>>> like link up or down, or accessing extended stats from the MAC component.
>>>
>>> There exist NFP host tools which access the NFP components for
>>> programming and debugging but they require the CPP interface. When the
>>> PMD is bound to the PF, the DPDK app owns the CPP interface, so these
>>> host tools can not access the NFP through other means like NFP kernel
>>> drivers.
>>>
>>> This patch adds a CPP bridge using the rte_service API which can be
>>> enabled by a DPDK app. Interestingly, DPDK clients like OVS will not
>>> enable specific service cores, but this can be performed with a
>>> secondary process specifically enabling this CPP bridge service and
>>> therefore giving access to the NFP to those host tools.
>>>
>>> v2:
>>>  - Avoid printfs for debugging
>>>  - fix compilation problems for powerpc
>>>
>>> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
>>
>> Applied to dpdk-next-net/master, thanks.
> 
> It does not compile with 32-bit toolchain.
> 
> Please check the occurences of %lu, thanks.

Hi Thomas,

We were aware of the build error, but let it be because nfp doesn't support
32-bit.

But I just realized that it is enabled by default in the 32-bit default
configs; we should disable it there.


Hi Alejandro,

Can you please disable nfp driver explicitly on
'defconfig_i686-native-linuxapp-*' config files, perhaps also on
'defconfig_x86_x32-native-linuxapp-gcc' too?

I will drop the existing patch from next-net.

And if it is possible to fix the build error, especially if it is just the
%lu in the logging, I would prefer the fix over the config update, but it
is up to you.

Thanks,
ferruh
Alejandro Lucero - Jan. 14, 2019, 6 p.m.
On Mon, Jan 14, 2019 at 10:40 AM Ferruh Yigit <ferruh.yigit@intel.com>
wrote:

> On 1/13/2019 9:41 PM, Thomas Monjalon wrote:
> > 11/01/2019 17:42, Ferruh Yigit:
> >> On 1/11/2019 1:25 PM, Alejandro Lucero wrote:
> >>> The Netronome's Network Flow Processor chip is highly programmable
> >>> with the goal of processing packets at high speed. Processing units
> >>> and other chip components are available from the host through the
> >>> PCIe CPP(Command Push Pull bus) interface. The NFP PF PMD configures
> >>> a CPP handler for setting up and working with vNICs, perform actions
> >>> like link up or down, or accessing extended stats from the MAC
> component.
> >>>
> >>> There exist NFP host tools which access the NFP components for
> >>> programming and debugging but they require the CPP interface. When the
> >>> PMD is bound to the PF, the DPDK app owns the CPP interface, so these
> >>> host tools can not access the NFP through other means like NFP kernel
> >>> drivers.
> >>>
> >>> This patch adds a CPP bridge using the rte_service API which can be
> >>> enabled by a DPDK app. Interestingly, DPDK clients like OVS will not
> >>> enable specific service cores, but this can be performed with a
> >>> secondary process specifically enabling this CPP bridge service and
> >>> therefore giving access to the NFP to those host tools.
> >>>
> >>> v2:
> >>>  - Avoid printfs for debugging
> >>>  - fix compilation problems for powerpc
> >>>
> >>> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com>
> >>
> >> Applied to dpdk-next-net/master, thanks.
> >
> > It does not compile with 32-bit toolchain.
> >
> > Please check the occurences of %lu, thanks.
>
> Hi Thomas,
>
> We aware the build error, but let it because nfp doesn't support 32-bit.
>
> But I just recognized that it is enabled by default on 32-bit default
> configs,
> we should disable them.
>
>
> Hi Alejandro,
>
> Can you please disable nfp driver explicitly on
> 'defconfig_i686-native-linuxapp-*' config files, perhaps also on
> 'defconfig_x86_x32-native-linuxapp-gcc' too?
>
> I will drop the existing patch from next-net.
>
>
Ok. I'll do asap.


> And if it is possible to fix the build error, specially if it is just for
> %lu of
> the logging, I prefer the fix against the config update, but it is up to
> you.
>
>
I did not see any logging errors/warnings when compiling, nor any from
checkpatch. I used gcc 7.3.1 (Ubuntu) and 8.2.1 (RH). What are you using
to trigger this error?


> Thanks,
> ferruh
>
Ferruh Yigit - Jan. 14, 2019, 6:22 p.m.
On 1/14/2019 6:00 PM, Alejandro Lucero wrote:
> 
> 
> On Mon, Jan 14, 2019 at 10:40 AM Ferruh Yigit <ferruh.yigit@intel.com
> <mailto:ferruh.yigit@intel.com>> wrote:
> 
>     On 1/13/2019 9:41 PM, Thomas Monjalon wrote:
>     > 11/01/2019 17:42, Ferruh Yigit:
>     >> On 1/11/2019 1:25 PM, Alejandro Lucero wrote:
>     >>> The Netronome's Network Flow Processor chip is highly programmable
>     >>> with the goal of processing packets at high speed. Processing units
>     >>> and other chip components are available from the host through the
>     >>> PCIe CPP(Command Push Pull bus) interface. The NFP PF PMD configures
>     >>> a CPP handler for setting up and working with vNICs, perform actions
>     >>> like link up or down, or accessing extended stats from the MAC component.
>     >>>
>     >>> There exist NFP host tools which access the NFP components for
>     >>> programming and debugging but they require the CPP interface. When the
>     >>> PMD is bound to the PF, the DPDK app owns the CPP interface, so these
>     >>> host tools can not access the NFP through other means like NFP kernel
>     >>> drivers.
>     >>>
>     >>> This patch adds a CPP bridge using the rte_service API which can be
>     >>> enabled by a DPDK app. Interestingly, DPDK clients like OVS will not
>     >>> enable specific service cores, but this can be performed with a
>     >>> secondary process specifically enabling this CPP bridge service and
>     >>> therefore giving access to the NFP to those host tools.
>     >>>
>     >>> v2:
>     >>>  - Avoid printfs for debugging
>     >>>  - fix compilation problems for powerpc
>     >>>
>     >>> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
>     <mailto:alejandro.lucero@netronome.com>>
>     >>
>     >> Applied to dpdk-next-net/master, thanks.
>     >
>     > It does not compile with 32-bit toolchain.
>     >
>     > Please check the occurences of %lu, thanks.
> 
>     Hi Thomas,
> 
>     We aware the build error, but let it because nfp doesn't support 32-bit.
> 
>     But I just recognized that it is enabled by default on 32-bit default configs,
>     we should disable them.
> 
> 
>     Hi Alejandro,
> 
>     Can you please disable nfp driver explicitly on
>     'defconfig_i686-native-linuxapp-*' config files, perhaps also on
>     'defconfig_x86_x32-native-linuxapp-gcc' too?
> 
>     I will drop the existing patch from next-net.
> 
> 
> Ok. I'll do asap.
>  
> 
>     And if it is possible to fix the build error, specially if it is just for %lu of
>     the logging, I prefer the fix against the config update, but it is up to you.
> 
> 
> I did not see any logging error/warning when compiling nor any when using
> checkpatch. I have used a gcc 7.3.1 (Ubuntu) and a 8.2.1 (RH). What are you
> using for triggering such error?

Using the 'i686-native-linuxapp-gcc' config, which is for 32-bit, gives the
following build error [1] with this patch.

This is mainly just the logging, and can be fixed by using %zu for size_t
and PRIx64-style macros for fixed-size variables.

For the "cpp_id = (offset >> 40) << 8;" build error you need to use a
fixed-size version of the variable, like "uint64_t offset".

[1]
.../drivers/net/nfp/nfp_net.c: In function ‘nfp_cpp_bridge_serve_write’:
.../drivers/net/nfp/nfp_net.c:2979:32: error: format ‘%lu’ expects argument of
type ‘long unsigned int’, but argument 6 has type ‘unsigned int’ [-Werror=format=]
   sizeof(off_t), sizeof(size_t));
                                ^
.../build/include/rte_log.h:324:25: note: in definition of macro ‘RTE_LOG’
    RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__)
                         ^
.../drivers/net/nfp/nfp_net.c:2978:2: note: in expansion of macro ‘PMD_CPP_LOG’
  PMD_CPP_LOG(DEBUG, "%s: offset size %lu, count_size: %lu\n", __func__,
  ^~~~~~~~~~~
>  
> 
>     Thanks,
>     ferruh
>
Alejandro Lucero - Jan. 14, 2019, 6:29 p.m.
On Mon, Jan 14, 2019 at 6:22 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:

> On 1/14/2019 6:00 PM, Alejandro Lucero wrote:
> >
> >
> > On Mon, Jan 14, 2019 at 10:40 AM Ferruh Yigit <ferruh.yigit@intel.com
> > <mailto:ferruh.yigit@intel.com>> wrote:
> >
> >     On 1/13/2019 9:41 PM, Thomas Monjalon wrote:
> >     > 11/01/2019 17:42, Ferruh Yigit:
> >     >> On 1/11/2019 1:25 PM, Alejandro Lucero wrote:
> >     >>> The Netronome's Network Flow Processor chip is highly
> programmable
> >     >>> with the goal of processing packets at high speed. Processing
> units
> >     >>> and other chip components are available from the host through the
> >     >>> PCIe CPP(Command Push Pull bus) interface. The NFP PF PMD
> configures
> >     >>> a CPP handler for setting up and working with vNICs, perform
> actions
> >     >>> like link up or down, or accessing extended stats from the MAC
> component.
> >     >>>
> >     >>> There exist NFP host tools which access the NFP components for
> >     >>> programming and debugging but they require the CPP interface.
> When the
> >     >>> PMD is bound to the PF, the DPDK app owns the CPP interface, so
> these
> >     >>> host tools can not access the NFP through other means like NFP
> kernel
> >     >>> drivers.
> >     >>>
> >     >>> This patch adds a CPP bridge using the rte_service API which can
> be
> >     >>> enabled by a DPDK app. Interestingly, DPDK clients like OVS will
> not
> >     >>> enable specific service cores, but this can be performed with a
> >     >>> secondary process specifically enabling this CPP bridge service
> and
> >     >>> therefore giving access to the NFP to those host tools.
> >     >>>
> >     >>> v2:
> >     >>>  - Avoid printfs for debugging
> >     >>>  - fix compilation problems for powerpc
> >     >>>
> >     >>> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
> >     <mailto:alejandro.lucero@netronome.com>>
> >     >>
> >     >> Applied to dpdk-next-net/master, thanks.
> >     >
> >     > It does not compile with 32-bit toolchain.
> >     >
> >     > Please check the occurences of %lu, thanks.
> >
> >     Hi Thomas,
> >
> >     We aware the build error, but let it because nfp doesn't support
> 32-bit.
> >
> >     But I just recognized that it is enabled by default on 32-bit
> default configs,
> >     we should disable them.
> >
> >
> >     Hi Alejandro,
> >
> >     Can you please disable nfp driver explicitly on
> >     'defconfig_i686-native-linuxapp-*' config files, perhaps also on
> >     'defconfig_x86_x32-native-linuxapp-gcc' too?
> >
> >     I will drop the existing patch from next-net.
> >
> >
> > Ok. I'll do asap.
> >
> >
> >     And if it is possible to fix the build error, specially if it is
> just for %lu of
> >     the logging, I prefer the fix against the config update, but it is
> up to you.
> >
> >
> > I did not see any logging error/warning when compiling nor any when using
> > checkpatch. I have used a gcc 7.3.1 (Ubuntu) and a 8.2.1 (RH). What are
> you
> > using for triggering such error?
>
> Using 'i686-native-linuxapp-gcc' config, which is for 32-bit, gives
> following
> build error [1] with this patch.
>
>
OK. But after the patch I have just sent removing the NFP PMD from 32-bit
builds, nothing more is really needed then, right?

If so, should I send the CPP bridge patch again, or can you redo it?


> This is mainly just because logging and can be fixed by using %z for
> size_t and
> use PRIx64 similar macros for fixed size variables.
>
> For "cpp_id = (offset >> 40) << 8;" build error you need to use fixed size
> version of the variable, like "uint64_t offset"
>
> [1]
> .../drivers/net/nfp/nfp_net.c: In function ‘nfp_cpp_bridge_serve_write’:
> .../drivers/net/nfp/nfp_net.c:2979:32: error: format ‘%lu’ expects
> argument of
> type ‘long unsigned int’, but argument 6 has type ‘unsigned int’
> [-Werror=format=]
>    sizeof(off_t), sizeof(size_t));
>                                 ^
> .../build/include/rte_log.h:324:25: note: in definition of macro ‘RTE_LOG’
>     RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__)
>                          ^
> .../drivers/net/nfp/nfp_net.c:2978:2: note: in expansion of macro
> ‘PMD_CPP_LOG’
>   PMD_CPP_LOG(DEBUG, "%s: offset size %lu, count_size: %lu\n", __func__,
>   ^~~~~~~~~~~
> >
> >
> >     Thanks,
> >     ferruh
> >
>
>
Thomas Monjalon - Jan. 14, 2019, 6:54 p.m.
14/01/2019 19:29, Alejandro Lucero:
> On Mon, Jan 14, 2019 at 6:22 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> 
> > On 1/14/2019 6:00 PM, Alejandro Lucero wrote:
> > >
> > >
> > > On Mon, Jan 14, 2019 at 10:40 AM Ferruh Yigit <ferruh.yigit@intel.com
> > > <mailto:ferruh.yigit@intel.com>> wrote:
> > >
> > >     On 1/13/2019 9:41 PM, Thomas Monjalon wrote:
> > >     > 11/01/2019 17:42, Ferruh Yigit:
> > >     >> On 1/11/2019 1:25 PM, Alejandro Lucero wrote:
> > >     >>> The Netronome's Network Flow Processor chip is highly
> > programmable
> > >     >>> with the goal of processing packets at high speed. Processing
> > units
> > >     >>> and other chip components are available from the host through the
> > >     >>> PCIe CPP(Command Push Pull bus) interface. The NFP PF PMD
> > configures
> > >     >>> a CPP handler for setting up and working with vNICs, perform
> > actions
> > >     >>> like link up or down, or accessing extended stats from the MAC
> > component.
> > >     >>>
> > >     >>> There exist NFP host tools which access the NFP components for
> > >     >>> programming and debugging but they require the CPP interface.
> > When the
> > >     >>> PMD is bound to the PF, the DPDK app owns the CPP interface, so
> > these
> > >     >>> host tools can not access the NFP through other means like NFP
> > kernel
> > >     >>> drivers.
> > >     >>>
> > >     >>> This patch adds a CPP bridge using the rte_service API which can
> > be
> > >     >>> enabled by a DPDK app. Interestingly, DPDK clients like OVS will
> > not
> > >     >>> enable specific service cores, but this can be performed with a
> > >     >>> secondary process specifically enabling this CPP bridge service
> > and
> > >     >>> therefore giving access to the NFP to those host tools.
> > >     >>>
> > >     >>> v2:
> > >     >>>  - Avoid printfs for debugging
> > >     >>>  - fix compilation problems for powerpc
> > >     >>>
> > >     >>> Signed-off-by: Alejandro Lucero <alejandro.lucero@netronome.com
> > >     <mailto:alejandro.lucero@netronome.com>>
> > >     >>
> > >     >> Applied to dpdk-next-net/master, thanks.
> > >     >
> > >     > It does not compile with 32-bit toolchain.
> > >     >
> > >     > Please check the occurences of %lu, thanks.
> > >
> > >     Hi Thomas,
> > >
> > >     We aware the build error, but let it because nfp doesn't support
> > 32-bit.
> > >
> > >     But I just recognized that it is enabled by default on 32-bit
> > default configs,
> > >     we should disable them.
> > >
> > >
> > >     Hi Alejandro,
> > >
> > >     Can you please disable nfp driver explicitly on
> > >     'defconfig_i686-native-linuxapp-*' config files, perhaps also on
> > >     'defconfig_x86_x32-native-linuxapp-gcc' too?
> > >
> > >     I will drop the existing patch from next-net.
> > >
> > >
> > > Ok. I'll do asap.
> > >
> > >
> > >     And if it is possible to fix the build error, specially if it is
> > just for %lu of
> > >     the logging, I prefer the fix against the config update, but it is
> > up to you.
> > >
> > >
> > > I did not see any logging error/warning when compiling nor any when using
> > > checkpatch. I have used a gcc 7.3.1 (Ubuntu) and a 8.2.1 (RH). What are
> > you
> > > using for triggering such error?
> >
> > Using 'i686-native-linuxapp-gcc' config, which is for 32-bit, gives
> > following
> > build error [1] with this patch.
> >
> >
> OK. But after the patch I have just sent for removing NFP PMD from 32 bits
> builds, nothing is really needed then. Right?
> 
> If so, should I send the patch again about the CPP bridge or you can redo
> it?

I will re-apply it on master.
Alejandro Lucero - Jan. 14, 2019, 7:26 p.m.
On Mon, Jan 14, 2019 at 6:54 PM Thomas Monjalon <thomas@monjalon.net> wrote:

> 14/01/2019 19:29, Alejandro Lucero:
> > On Mon, Jan 14, 2019 at 6:22 PM Ferruh Yigit <ferruh.yigit@intel.com>
> wrote:
> >
> > > On 1/14/2019 6:00 PM, Alejandro Lucero wrote:
> > > >
> > > >
> > > > On Mon, Jan 14, 2019 at 10:40 AM Ferruh Yigit <
> ferruh.yigit@intel.com
> > > > <mailto:ferruh.yigit@intel.com>> wrote:
> > > >
> > > >     On 1/13/2019 9:41 PM, Thomas Monjalon wrote:
> > > >     > 11/01/2019 17:42, Ferruh Yigit:
> > > >     >> On 1/11/2019 1:25 PM, Alejandro Lucero wrote:
> > > >     >>> The Netronome's Network Flow Processor chip is highly
> > > programmable
> > > >     >>> with the goal of processing packets at high speed. Processing
> > > units
> > > >     >>> and other chip components are available from the host
> through the
> > > >     >>> PCIe CPP(Command Push Pull bus) interface. The NFP PF PMD
> > > configures
> > > >     >>> a CPP handler for setting up and working with vNICs, perform
> > > actions
> > > >     >>> like link up or down, or accessing extended stats from the
> MAC
> > > component.
> > > >     >>>
> > > >     >>> There exist NFP host tools which access the NFP components
> for
> > > >     >>> programming and debugging but they require the CPP interface.
> > > When the
> > > >     >>> PMD is bound to the PF, the DPDK app owns the CPP interface,
> so
> > > these
> > > >     >>> host tools can not access the NFP through other means like
> NFP
> > > kernel
> > > >     >>> drivers.
> > > >     >>>
> > > >     >>> This patch adds a CPP bridge using the rte_service API which
> can
> > > be
> > > >     >>> enabled by a DPDK app. Interestingly, DPDK clients like OVS
> will
> > > not
> > > >     >>> enable specific service cores, but this can be performed
> with a
> > > >     >>> secondary process specifically enabling this CPP bridge
> service
> > > and
> > > >     >>> therefore giving access to the NFP to those host tools.
> > > >     >>>
> > > >     >>> v2:
> > > >     >>>  - Avoid printfs for debugging
> > > >     >>>  - fix compilation problems for powerpc
> > > >     >>>
> > > >     >>> Signed-off-by: Alejandro Lucero <
> alejandro.lucero@netronome.com
> > > >     <mailto:alejandro.lucero@netronome.com>>
> > > >     >>
> > > >     >> Applied to dpdk-next-net/master, thanks.
> > > >     >
> > > >     > It does not compile with 32-bit toolchain.
> > > >     >
> > > >     > Please check the occurences of %lu, thanks.
> > > >
> > > >     Hi Thomas,
> > > >
> > > >     We aware the build error, but let it because nfp doesn't support
> > > 32-bit.
> > > >
> > > >     But I just recognized that it is enabled by default on 32-bit
> > > default configs,
> > > >     we should disable them.
> > > >
> > > >
> > > >     Hi Alejandro,
> > > >
> > > >     Can you please disable nfp driver explicitly on
> > > >     'defconfig_i686-native-linuxapp-*' config files, perhaps also on
> > > >     'defconfig_x86_x32-native-linuxapp-gcc' too?
> > > >
> > > >     I will drop the existing patch from next-net.
> > > >
> > > >
> > > > Ok. I'll do asap.
> > > >
> > > >
> > > >     And if it is possible to fix the build error, specially if it is
> > > just for %lu of
> > > >     the logging, I prefer the fix against the config update, but it
> is
> > > up to you.
> > > >
> > > >
> > > > I did not see any logging error/warning when compiling nor any when
> using
> > > > checkpatch. I have used a gcc 7.3.1 (Ubuntu) and a 8.2.1 (RH). What
> are
> > > you
> > > > using for triggering such error?
> > >
> > > Using 'i686-native-linuxapp-gcc' config, which is for 32-bit, gives
> > > following
> > > build error [1] with this patch.
> > >
> > >
> > OK. But after the patch I have just sent for removing NFP PMD from 32
> bits
> > builds, nothing is really needed then. Right?
> >
> > If so, should I send the patch again about the CPP bridge or you can redo
> > it?
>
> I will re-apply it on master.
>
>
>
Great. Thanks!

Patch

diff --git a/drivers/net/nfp/nfp_net.c b/drivers/net/nfp/nfp_net.c
index 05a44a2a9..a791e95e2 100644
--- a/drivers/net/nfp/nfp_net.c
+++ b/drivers/net/nfp/nfp_net.c
@@ -54,6 +54,7 @@ 
 #include <rte_string_fns.h>
 #include <rte_alarm.h>
 #include <rte_spinlock.h>
+#include <rte_service_component.h>
 
 #include "nfpcore/nfp_cpp.h"
 #include "nfpcore/nfp_nffw.h"
@@ -66,6 +67,14 @@ 
 #include "nfp_net_logs.h"
 #include "nfp_net_ctrl.h"
 
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <sys/ioctl.h>
+#include <errno.h>
+
 /* Prototypes */
 static void nfp_net_close(struct rte_eth_dev *dev);
 static int nfp_net_configure(struct rte_eth_dev *dev);
@@ -2949,14 +2958,359 @@  nfp_net_init(struct rte_eth_dev *eth_dev)
 	return err;
 }
 
+#define NFP_CPP_MEMIO_BOUNDARY		(1 << 20)
+
+/*
+ * Serving a write request to NFP from host programs. The request
+ * sends the write size and the CPP target. The bridge makes use
+ * of CPP interface handler configured by the PMD setup.
+ */
+static int
+nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
+{
+	struct nfp_cpp_area *area;
+	off_t offset, nfp_offset;
+	uint32_t cpp_id, pos, len;
+	uint32_t tmpbuf[16];
+	size_t count, curlen, totlen = 0;
+	int err = 0;
+
+	PMD_CPP_LOG(DEBUG, "%s: offset size %lu, count_size: %lu\n", __func__,
+		sizeof(off_t), sizeof(size_t));
+
+	/* Reading the count param */
+	err = recv(sockfd, &count, sizeof(off_t), 0);
+	if (err != sizeof(off_t))
+		return -EINVAL;
+
+	curlen = count;
+
+	/* Reading the offset param */
+	err = recv(sockfd, &offset, sizeof(off_t), 0);
+	if (err != sizeof(off_t))
+		return -EINVAL;
+
+	/* Obtain target's CPP ID and offset in target */
+	cpp_id = (offset >> 40) << 8;
+	nfp_offset = offset & ((1ull << 40) - 1);
+
+	PMD_CPP_LOG(DEBUG, "%s: count %lu and offset %ld\n", __func__, count,
+		offset);
+	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %ld\n", __func__,
+		cpp_id, nfp_offset);
+
+	/* Adjust length if not aligned */
+	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
+	    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+		curlen = NFP_CPP_MEMIO_BOUNDARY -
+			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+	}
+
+	while (count > 0) {
+		/* configure a CPP PCIe2CPP BAR for mapping the CPP target */
+		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
+						    nfp_offset, curlen);
+		if (!area) {
+			RTE_LOG(ERR, PMD, "%s: area alloc fail\n", __func__);
+			return -EIO;
+		}
+
+		/* mapping the target */
+		err = nfp_cpp_area_acquire(area);
+		if (err < 0) {
+			RTE_LOG(ERR, PMD, "area acquire failed\n");
+			nfp_cpp_area_free(area);
+			return -EIO;
+		}
+
+		for (pos = 0; pos < curlen; pos += len) {
+			len = curlen - pos;
+			if (len > sizeof(tmpbuf))
+				len = sizeof(tmpbuf);
+
+			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %lu\n", __func__,
+					   len, count);
+			err = recv(sockfd, tmpbuf, len, MSG_WAITALL);
+			if (err != (int)len) {
+				RTE_LOG(ERR, PMD,
+					"%s: error when receiving, %d of %lu\n",
+					__func__, err, count);
+				nfp_cpp_area_release(area);
+				nfp_cpp_area_free(area);
+				return -EIO;
+			}
+			err = nfp_cpp_area_write(area, pos, tmpbuf, len);
+			if (err < 0) {
+				RTE_LOG(ERR, PMD, "nfp_cpp_area_write error\n");
+				nfp_cpp_area_release(area);
+				nfp_cpp_area_free(area);
+				return -EIO;
+			}
+		}
+
+		nfp_offset += pos;
+		totlen += pos;
+		nfp_cpp_area_release(area);
+		nfp_cpp_area_free(area);
+
+		count -= pos;
+		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
+			 NFP_CPP_MEMIO_BOUNDARY : count;
+	}
+
+	return 0;
+}
+
+/*
+ * Serving a read request to NFP from host programs. The request
+ * sends the read size and the CPP target. The bridge makes use
+ * of CPP interface handler configured by the PMD setup. The read
+ * data is sent to the requester using the same socket.
+ */
+static int
+nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
+{
+	struct nfp_cpp_area *area;
+	off_t offset, nfp_offset;
+	uint32_t cpp_id, pos, len;
+	uint32_t tmpbuf[16];
+	size_t count, curlen, totlen = 0;
+	int err = 0;
+
+	PMD_CPP_LOG(DEBUG, "%s: offset size %lu, count_size: %lu\n", __func__,
+		sizeof(off_t), sizeof(size_t));
+
+	/* Reading the count param */
+	err = recv(sockfd, &count, sizeof(off_t), 0);
+	if (err != sizeof(off_t))
+		return -EINVAL;
+
+	curlen = count;
+
+	/* Reading the offset param */
+	err = recv(sockfd, &offset, sizeof(off_t), 0);
+	if (err != sizeof(off_t))
+		return -EINVAL;
+
+	/* Obtain target's CPP ID and offset in target */
+	cpp_id = (offset >> 40) << 8;
+	nfp_offset = offset & ((1ull << 40) - 1);
+
+	PMD_CPP_LOG(DEBUG, "%s: count %lu and offset %ld\n", __func__, count,
+			   offset);
+	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %ld\n", __func__,
+			   cpp_id, nfp_offset);
+
+	/* Adjust length if not aligned */
+	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
+	    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+		curlen = NFP_CPP_MEMIO_BOUNDARY -
+			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+	}
+
+	while (count > 0) {
+		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
+						    nfp_offset, curlen);
+		if (!area) {
+			RTE_LOG(ERR, PMD, "%s: area alloc failed\n", __func__);
+			return -EIO;
+		}
+
+		err = nfp_cpp_area_acquire(area);
+		if (err < 0) {
+			RTE_LOG(ERR, PMD, "area acquire failed\n");
+			nfp_cpp_area_free(area);
+			return -EIO;
+		}
+
+		for (pos = 0; pos < curlen; pos += len) {
+			len = curlen - pos;
+			if (len > sizeof(tmpbuf))
+				len = sizeof(tmpbuf);
+
+			err = nfp_cpp_area_read(area, pos, tmpbuf, len);
+			if (err < 0) {
+				RTE_LOG(ERR, PMD, "nfp_cpp_area_read error\n");
+				nfp_cpp_area_release(area);
+				nfp_cpp_area_free(area);
+				return -EIO;
+			}
+			PMD_CPP_LOG(DEBUG, "%s: sending %u of %lu\n", __func__,
+					   len, count);
+
+			err = send(sockfd, tmpbuf, len, 0);
+			if (err != (int)len) {
+				RTE_LOG(ERR, PMD,
+					"%s: error when sending: %d of %lu\n",
+					__func__, err, count);
+				nfp_cpp_area_release(area);
+				nfp_cpp_area_free(area);
+				return -EIO;
+			}
+		}
+
+		nfp_offset += pos;
+		totlen += pos;
+		nfp_cpp_area_release(area);
+		nfp_cpp_area_free(area);
+
+		count -= pos;
+		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
+			NFP_CPP_MEMIO_BOUNDARY : count;
+	}
+	return 0;
+}
+
+#define NFP_IOCTL 'n'
+#define NFP_IOCTL_CPP_IDENTIFICATION _IOW(NFP_IOCTL, 0x8f, uint32_t)
+/*
+ * Serving a ioctl command from host NFP tools. This usually goes to
+ * a kernel driver char driver but it is not available when the PF is
+ * bound to the PMD. Currently just one ioctl command is served and it
+ * does not require any CPP access at all.
+ */
+static int
+nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp)
+{
+	uint32_t cmd, ident_size, tmp;
+	int err;
+
+	/* Read the ioctl command */
+	err = recv(sockfd, &cmd, 4, 0);
+	if (err != 4) {
+		RTE_LOG(ERR, PMD, "%s: read error from socket\n", __func__);
+		return -EIO;
+	}
+
+	/* Only supporting NFP_IOCTL_CPP_IDENTIFICATION */
+	if (cmd != NFP_IOCTL_CPP_IDENTIFICATION) {
+		RTE_LOG(ERR, PMD, "%s: unknown cmd %u\n", __func__, cmd);
+		return -EINVAL;
+	}
+
+	err = recv(sockfd, &ident_size, 4, 0);
+	if (err != 4) {
+		RTE_LOG(ERR, PMD, "%s: read error from socket\n", __func__);
+		return -EIO;
+	}
+
+	tmp = nfp_cpp_model(cpp);
+
+	PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x\n", __func__, tmp);
+
+	err = send(sockfd, &tmp, 4, 0);
+	if (err != 4) {
+		RTE_LOG(ERR, PMD, "%s: error writing to socket\n", __func__);
+		return -EIO;
+	}
+
+	tmp = cpp->interface;
+
+	PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x\n", __func__, tmp);
+
+	err = send(sockfd, &tmp, 4, 0);
+	if (err != 4) {
+		RTE_LOG(ERR, PMD, "%s: error writing to socket\n", __func__);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+#define NFP_BRIDGE_OP_READ	20
+#define NFP_BRIDGE_OP_WRITE	30
+#define NFP_BRIDGE_OP_IOCTL	40
+
+/*
+ * This is the code to be executed by a service core. The CPP bridge interface
+ * is based on a unix socket: the read, write and ioctl requests usually
+ * handled by a kernel char driver are served by the CPP bridge instead. NFP
+ * host tools can then be executed through a wrapper library (e.g. pointed to
+ * by LD_LIBRARY_PATH), completely unaware that the CPP bridge is standing in
+ * for the NFP kernel char driver for CPP accesses.
+ */
+static int32_t
+nfp_cpp_bridge_service_func(void *args)
+{
+	struct sockaddr address;
+	struct nfp_cpp *cpp = args;
+	int sockfd, datafd, op, ret;
+
+	unlink("/tmp/nfp_cpp");
+	sockfd = socket(AF_UNIX, SOCK_STREAM, 0);
+	if (sockfd < 0) {
+		RTE_LOG(ERR, PMD, "%s: socket creation error. Service failed\n",
+			__func__);
+		return -EIO;
+	}
+
+	memset(&address, 0, sizeof(struct sockaddr));
+
+	address.sa_family = AF_UNIX;
+	strcpy(address.sa_data, "/tmp/nfp_cpp");
+
+	ret = bind(sockfd, (const struct sockaddr *)&address,
+		   sizeof(struct sockaddr));
+	if (ret < 0) {
+		RTE_LOG(ERR, PMD, "%s: bind error (%d). Service failed\n",
+				  __func__, errno);
+		close(sockfd);
+		return ret;
+	}
+
+	ret = listen(sockfd, 20);
+	if (ret < 0) {
+		RTE_LOG(ERR, PMD, "%s: listen error (%d). Service failed\n",
+				  __func__, errno);
+		close(sockfd);
+		return ret;
+	}
+
+	for (;;) {
+		datafd = accept(sockfd, NULL, NULL);
+		if (datafd < 0) {
+			RTE_LOG(ERR, PMD, "%s: accept call error (%d)\n",
+					  __func__, errno);
+			RTE_LOG(ERR, PMD, "%s: service failed\n", __func__);
+			close(sockfd);
+			return -EIO;
+		}
+
+		while (1) {
+			ret = recv(datafd, &op, 4, 0);
+			if (ret <= 0) {
+				PMD_CPP_LOG(DEBUG, "%s: socket close\n",
+						   __func__);
+				break;
+			}
+
+			PMD_CPP_LOG(DEBUG, "%s: getting op %u\n", __func__, op);
+
+			if (op == NFP_BRIDGE_OP_READ)
+				nfp_cpp_bridge_serve_read(datafd, cpp);
+
+			if (op == NFP_BRIDGE_OP_WRITE)
+				nfp_cpp_bridge_serve_write(datafd, cpp);
+
+			if (op == NFP_BRIDGE_OP_IOCTL)
+				nfp_cpp_bridge_serve_ioctl(datafd, cpp);
+
+			if (op == 0)
+				break;
+		}
+		close(datafd);
+	}
+	close(sockfd);
+
+	return 0;
+}
+
 static int
 nfp_pf_create_dev(struct rte_pci_device *dev, int port, int ports,
 		  struct nfp_cpp *cpp, struct nfp_hwinfo *hwinfo,
 		  int phys_port, struct nfp_rtsym_table *sym_tbl, void **priv)
 {
 	struct rte_eth_dev *eth_dev;
-	struct nfp_net_hw *hw;
+	struct nfp_net_hw *hw = NULL;
 	char *port_name;
+	struct rte_service_spec service;
 	int retval;
 
 	port_name = rte_zmalloc("nfp_pf_port_name", 100, 0);
@@ -3027,6 +3381,29 @@  nfp_pf_create_dev(struct rte_pci_device *dev, int port, int ports,
 
 	rte_free(port_name);
 
+	if (port == 0) {
+		/*
+		 * The rte_service needs to be created just once per PMD, and
+		 * the cpp handler needs to be linked to the service. Secondary
+		 * processes can be used for debugging DPDK apps that require
+		 * the CPP interface for accessing NFP components, and the cpp
+		 * handler for secondary processes is available at this point.
+		 */
+		memset(&service, 0, sizeof(struct rte_service_spec));
+		snprintf(service.name, sizeof(service.name), "nfp_cpp_service");
+		service.callback = nfp_cpp_bridge_service_func;
+		service.callback_userdata = (void *)cpp;
+
+		hw = (struct nfp_net_hw *)(eth_dev->data->dev_private);
+
+		if (rte_service_component_register(&service,
+						   &hw->nfp_cpp_service_id))
+			RTE_LOG(ERR, PMD, "NFP CPP bridge service register failed\n");
+		else
+			RTE_LOG(DEBUG, PMD, "NFP CPP bridge service registered\n");
+	}
+
 	return retval;
 
 probe_failed:
diff --git a/drivers/net/nfp/nfp_net_logs.h b/drivers/net/nfp/nfp_net_logs.h
index 9952881cd..37690576a 100644
--- a/drivers/net/nfp/nfp_net_logs.h
+++ b/drivers/net/nfp/nfp_net_logs.h
@@ -56,6 +56,15 @@  extern int nfp_logtype_init;
 #define ASSERT(x) do { } while (0)
 #endif
 
+#define RTE_LIBRTE_NFP_NET_DEBUG_CPP
+
+#ifdef RTE_LIBRTE_NFP_NET_DEBUG_CPP
+#define PMD_CPP_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, fmt, ## args)
+#else
+#define PMD_CPP_LOG(level, fmt, args...) do { } while (0)
+#endif
+
 extern int nfp_logtype_driver;
 #define PMD_DRV_LOG(level, fmt, args...) \
 	rte_log(RTE_LOG_ ## level, nfp_logtype_driver, \
diff --git a/drivers/net/nfp/nfp_net_pmd.h b/drivers/net/nfp/nfp_net_pmd.h
index b01036df2..7913a29c2 100644
--- a/drivers/net/nfp/nfp_net_pmd.h
+++ b/drivers/net/nfp/nfp_net_pmd.h
@@ -456,6 +456,7 @@  struct nfp_net_hw {
 
 	struct nfp_hwinfo *hwinfo;
 	struct nfp_rtsym_table *sym_tbl;
+	uint32_t nfp_cpp_service_id;
 };
 
 struct nfp_net_adapter {