interface result parameter set for dpdk device #189

Open
pperiyasamy opened this issue Jul 14, 2021 · 3 comments

Comments

@pperiyasamy

What happened?

The k8s.v1.cni.cncf.io/network-status pod annotation contains the interface parameter set to net1 for a dpdk device.

Annotations:  cni.projectcalico.org/podIP: 192.168.167.130/32
              cni.projectcalico.org/podIPs: 192.168.167.130/32
              k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "ips": [
                        "192.168.167.130"
                    ],
                    "default": true,
                    "dns": {}
                },{
                    "name": "default/sriov-cni-dpdk-net",
                    "interface": "net1",
                    "ips": [
                        "11.56.217.2"
                    ],
                    "dns": {},
                    "device-info": {
                        "type": "pci",
                        "version": "1.0.0",
                        "pci": {
                            "pci-address": "0000:08:06.3"
                        }
                    }
                }]

What did you expect to happen?

The interface parameter should not be displayed for a dpdk device. This happens because result.Interfaces is set here even in the dpdk case.

What are the minimal steps needed to reproduce the bug?

Create a pod with a secondary network attachment for a dpdk device.

Anything else we need to know?

Currently, even with the k8s.v1.cni.cncf.io/network-status annotation mounted inside the pod container via the downward API, there is no direct way to identify whether a given PCI address is a net or a dpdk device.
Should PciDevice in DeviceInfo contain a bound-driver parameter so that the type of device can be determined inside the pod container?
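To make the gap concrete, here is a minimal Go sketch that parses a network-status annotation (as mounted via the downward API) and inspects its device-info. The `Driver` field below is the hypothetical bound-driver parameter proposed in this issue; it does not exist in the current device-info spec, so it always comes back empty today.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// NetworkStatus mirrors one entry of the k8s.v1.cni.cncf.io/network-status
// annotation, using the field names shown in the annotation above.
type NetworkStatus struct {
	Name       string      `json:"name"`
	Interface  string      `json:"interface,omitempty"`
	IPs        []string    `json:"ips,omitempty"`
	Default    bool        `json:"default,omitempty"`
	DeviceInfo *DeviceInfo `json:"device-info,omitempty"`
}

type DeviceInfo struct {
	Type    string   `json:"type"`
	Version string   `json:"version"`
	Pci     *PciInfo `json:"pci,omitempty"`
}

type PciInfo struct {
	PciAddress string `json:"pci-address"`
	// Driver is NOT part of the current device-info spec; it is the
	// hypothetical "bound driver" field discussed in this issue.
	Driver string `json:"driver,omitempty"`
}

// sample is the dpdk entry from the annotation shown in this issue.
const sample = `[{"name":"default/sriov-cni-dpdk-net","interface":"net1",` +
	`"ips":["11.56.217.2"],"device-info":{"type":"pci","version":"1.0.0",` +
	`"pci":{"pci-address":"0000:08:06.3"}}}]`

func parseNetworkStatus(data []byte) ([]NetworkStatus, error) {
	var statuses []NetworkStatus
	err := json.Unmarshal(data, &statuses)
	return statuses, err
}

func main() {
	statuses, err := parseNetworkStatus([]byte(sample))
	if err != nil {
		panic(err)
	}
	for _, s := range statuses {
		if s.DeviceInfo != nil && s.DeviceInfo.Pci != nil {
			// Today only the PCI address is available; nothing here says
			// whether the VF is bound to vfio-pci or to a kernel driver.
			fmt.Printf("%s -> pci %s (driver: %q)\n",
				s.Name, s.DeviceInfo.Pci.PciAddress, s.DeviceInfo.Pci.Driver)
		}
	}
}
```

As the empty `driver` output illustrates, an application in the pod can recover the PCI address but not the bound driver from the annotation alone.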

Component Versions

Please fill in the below table with the version numbers of applicable components used.

| Component | Version |
| --- | --- |
| SR-IOV CNI Plugin | |
| Multus | |
| SR-IOV Network Device Plugin | |
| Kubernetes | |
| OS | |

Config Files

Config file locations may be config dependent.

CNI config (Try '/etc/cni/net.d/')
Device pool config file location (Try '/etc/pcidp/config.json')
Multus config (Try '/etc/cni/multus/net.d')
Kubernetes deployment type ( Bare Metal, Kubeadm etc.)
Kubeconfig file
SR-IOV Network Custom Resource Definition

Logs

SR-IOV Network Device Plugin Logs (use kubectl logs $PODNAME)
Multus logs (If enabled. Try '/var/log/multus.log' )
Kubelet logs (journalctl -u kubelet)
@pperiyasamy (Author)

/cc @JanScheurich

@adrianchiris (Contributor)

Are there implications of keeping the interface name empty in the results? Is that OK from a CNI spec point of view?

One thing I can think of is that we rely on it (being provided in CmdAdd / CmdDel) to cache the state of the VF.
From the CNI spec:

The operation of a network configuration on a container is called an attachment. An attachment may be uniquely identified by the (CNI_CONTAINERID, CNI_IFNAME) tuple.
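The spec's uniqueness requirement can be sketched as a cache key derived from the (CNI_CONTAINERID, CNI_IFNAME) tuple. The directory below is a made-up example, not the path sriov-cni actually uses; the point is only that an empty interface name would make the key non-unique across attachments in the same container.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// cacheKey builds a per-attachment identifier from the
// (CNI_CONTAINERID, CNI_IFNAME) tuple, per the CNI spec's definition
// of an attachment. The directory is hypothetical, for illustration.
func cacheKey(containerID, ifName string) string {
	return filepath.Join("/var/lib/cni/example-cache", containerID+"-"+ifName)
}

func main() {
	// Two attachments in the same container stay distinguishable only
	// because their interface names differ.
	fmt.Println(cacheKey("abc123", "net1"))
	fmt.Println(cacheKey("abc123", "net2"))
}
```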

@zshi-redhat (Collaborator)

Discussed in Monday's resource mgmt meeting:

The interface name is used as a reference when caching the CNI config in sriov-cni; this is also true for dpdk interfaces. To remove it, there are several considerations:

  1. sriov-cni may need to change to use the PCI ID or another unique identifier when caching the CNI config
  2. the interface name is passed down by Multus and is not used to configure the dpdk interface name; we need to understand the implications of not returning the name back to Multus (e.g. does it affect cmdDel?)

An alternative is to add another field to the device-info spec (e.g. driver) to indicate whether the interface is using vfio-pci or a kernel driver, so that the application knows how to make use of the device. In that case, the interface in network-status is just an index name for the vfio-pci device.
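A device-info entry extended with such a field might look like the fragment below. Note that the driver field is hypothetical and does not exist in the current device-info spec:

```json
"device-info": {
    "type": "pci",
    "version": "1.0.0",
    "pci": {
        "pci-address": "0000:08:06.3",
        "driver": "vfio-pci"
    }
}
```

With this, an application inside the pod could branch on `driver` (e.g. vfio-pci vs. a kernel driver) instead of guessing from the presence or absence of a netdev.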
