[nfacctd] Setting AS information from BGP no longer works in bleeding-edge #768

Open
SanderDelden opened this issue Mar 14, 2024 · 8 comments

Comments

@SanderDelden

Description
In the latest commit (bleeding-edge), setting nfacctd_as to bgp results in the ASN for all flows being set to 0. Changing the setting to netflow makes the correct AS data appear again. The same applies to bgp_peer_src_as_type.

Setting both nfacctd_as and bgp_peer_src_as_type to bgp in 1.7.8 works without any issues.
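
For reference, a minimal sketch of the two combinations being compared (assuming the rest of the configuration stays the same between runs):

! fails on bleeding-edge (as_dst always 0), works on 1.7.8
nfacctd_as: bgp
bgp_peer_src_as_type: bgp

! works on both versions
nfacctd_as: netflow
bgp_peer_src_as_type: netflow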

Version
The bleeding-edge Docker tag was used: nfacctd 1.7.10-git (20240312-1 (2a62747))

Appreciation
Please consider starring this project to boost our reach on GitHub!

If any additional information is required, please let me know.

@paololucente (Member) commented Mar 15, 2024

Hi Sander ( @SanderDelden ),

I had a quick try at this and I seem unable to reproduce the issue. Does the config in issue #769 also apply to this issue? Although I am sure it is innocent, can you post the content of the /etc/pmacct/mappings/bgp.map map? Also, can you check the log for anything suspicious, that is, any warning or error message?

Paolo

@SanderDelden (Author)

Hi Paolo,

My apologies, I should have included the configuration in my initial comment. I've stripped the configuration down to the bare minimum for testing purposes; here you go:

nfacctd.conf:

plugins: print[TEST]

bgp_daemon: true
bgp_daemon_port: 179
nfacctd_as: bgp
bgp_daemon_max_peers: 1
bgp_agent_map: /etc/pmacct/mappings/bgp.map
nfacctd_port: 5009

aggregate[TEST]: dst_as
print_output_file[TEST]: /tmp/pmacct/1m_TEST.json
print_output[TEST]: json
print_history[TEST]: 1m
print_history_roundoff[TEST]: m
print_refresh_time[TEST]: 60
print_output_file_append[TEST]: true

bgp.map:

bgp_ip=x.x.x.x  ip=0.0.0.0/0
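! bgp_ip is the BGP peer whose routes resolve the lookups; ip=0.0.0.0/0 matches flows from any NetFlow agent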

All entries in 1m_TEST.json look as follows:

{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:48:00", "stamp_updated": "2024-03-15 09:50:01", "packets": 101479, "bytes": 100798205}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:49:00", "stamp_updated": "2024-03-15 09:50:01", "packets": 144579, "bytes": 143524105}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:49:00", "stamp_updated": "2024-03-15 09:51:01", "packets": 99910, "bytes": 99144753}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:50:00", "stamp_updated": "2024-03-15 09:51:01", "packets": 140996, "bytes": 139505374}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:50:00", "stamp_updated": "2024-03-15 09:52:01", "packets": 102410, "bytes": 102121809}
{"event_type": "purge", "as_dst": 0, "stamp_inserted": "2024-03-15 09:51:00", "stamp_updated": "2024-03-15 09:52:01", "packets": 142988, "bytes": 141700933}

The configuration above works in 1.7.8, although the first purge of the cache lists all AS numbers as "0". I assume this is because the BGP session is not instantly established. This is no problem, I just thought I'd mention it.

I've checked the (debug) logging and nothing strange is observed.

@paololucente (Member)

Hi Sander ( @SanderDelden ),

I did manage to reproduce the scenario but unfortunately not the issue: both 1.7.8 and the latest commit work fine here. Can you try setting nfacctd_net: bgp too and see if it makes any difference? Also, is any ADD-PATH capability involved in the BGP feed?

Paolo

@SanderDelden (Author)

Hi Paolo,

Setting nfacctd_net: bgp unfortunately does not change the output. We are not using ADD-PATH. If it is of use to you, I can provide a PCAP of the BGP traffic.

@paololucente (Member) commented Mar 19, 2024

It would help, yes: a PCAP of both BGP and flows (maybe in two separate traces). Unfortunately BGP traffic can't be replayed, so I could only inspect the traces. What would help much, much more (also having #769 in mind) would be access to the container that the flows and BGP are pointed to, so I could debug, recompile and troubleshoot both 1.7.8 and the latest master code.
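
For example, two captures along these lines would do (the interface name and file names are just placeholders; the ports match the config posted above, BGP on TCP/179 and NetFlow on UDP/5009):

tcpdump -i eth0 -w bgp.pcap tcp port 179
tcpdump -i eth0 -w flows.pcap udp port 5009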

@SanderDelden (Author)

Hi Paolo,

Would it be possible to debug this over a Teams session (or any other application you prefer)?

@paololucente (Member)

This could work, yes. Can we switch to unicast email for the details?

@doup123 commented Apr 23, 2024

Hello @paololucente, did you by any chance come to a conclusion on this?
I am facing something similar to what @SanderDelden mentioned.

I have configured pmacct to receive NetFlow v9 messages (including the ingress and egress VRF ID packet fields) from a Cisco router and have also established iBGP peering between them. The router sends both IPv4 and VPNv4 routes to pmacct, and they are received correctly.

I have also configured:

  • flow_to_rd_map: to associate interfaces with RDs
  • bgp_peer_src_as_map: to specify the src_as of specific interfaces
  • pre_tag_map: for enriching the flows with some selected data passed as labels (encoded as map)

Below you may find the corresponding config:

bgp_daemon: true
bgp_daemon_ip: 0.0.0.0
bgp_daemon_max_peers: 100
bgp_daemon_as: XXXXX
nfacctd_as: bgp
nfacctd_net: bgp


#bgp_table_dump_file: /var/log/pmacct/bgp-$peer_src_ip-%H%M.log
bgp_table_dump_refresh_time: 120
bgp_table_dump_kafka_broker_host: XXXXX
bgp_table_dump_kafka_topic: pmacct-bgp-dump

# https://github.com/pmacct/pmacct/blob/master/CONFIG-KEYS#L2833 (needed to define where the peer src AS should be taken from)
bgp_peer_src_as_type: map

nfacctd_port: 2055
! Set the plugin buffers and timeouts for performance tuning
aggregate: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, timestamp_start, timestamp_end, src_as, dst_as, peer_src_as, peer_dst_as, label
plugins: kafka
plugin_buffer_size: 204800
plugin_pipe_size: 20480000
nfacctd_pipe_size: 20480000

! Configure the Kafka plugin
kafka_output: json
kafka_broker_host: XXXXX
kafka_topic: pmacct-enriched2
kafka_refresh_time: 60
kafka_history: 5m
kafka_history_roundoff: m

! MAPS DEFINITION
maps_entries: 2000000
!bgp_table_per_peer_buckets: 12
!aggregate_primitives: /etc/pmacct/primitives.lst
sampling_map: /etc/pmacct/sampling.map
pre_tag_map: pretag.map
pre_tag_label_encode_as_map: true
flow_to_rd_map: flow_to_rd.map
bgp_peer_src_as_map: peers.map
logfile: /var/log/pmacct1.log
daemonize: false

pmacct version

nfacctd -V
NetFlow Accounting Daemon, nfacctd 1.7.10-git [20240405-1 (6362a2c9)]

Arguments:
 'CFLAGS=-fcommon' '--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'

Libs:
cdada 0.5.0
libpcap version 1.10.3 (with TPACKET_V3)
rdkafka 2.0.2
jansson 2.14

Plugins:
memory
print
nfprobe
sfprobe
tee
kafka

System:
Linux 5.4.0-155-generic #172-Ubuntu SMP Fri Jul 7 16:10:02 UTC 2023 x86_64

Compiler:
gcc 12.2.0

I have bumped into a very strange problem, though: the dst_as for flows related to VPNv4 routes is correctly identified and injected into the aggregated result, but the dst_as for flows related to IPv4 routes is set to 0.

The dst_as in the original NetFlow PCAP is 0 in both cases (in the NetFlow packets themselves), but only in the VPNv4 case does pmacct substitute its value.

Shouldn't routes that do not correspond to any RD (i.e. the IPv4 routes) be used to enrich all flows that do not match the flow_to_rd_map criteria?

Here is how I have constructed the flow_to_rd_map:

id=0:AS:1234	ip=1.2.3.4 in=111
id=0:AS:1235	ip=1.2.3.4 in=112
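! id = the route distinguisher to associate; ip = the NetFlow exporter address; in = the input ifIndex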

Am I missing anything?
P.S.
The rest of the maps (pretag.map and bgp_peer_src_as_map) work as expected, enriching the flows appropriately.
