
Switch over error types to use thiserror #124

Open
wants to merge 12 commits into base: main

Conversation

jmeggitt (Contributor)

This pull request reviews the error types being used and converts them over to the thiserror crate. I would like to note, though, that the conversion involved many subjective decisions.
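
For context, thiserror derives the Display and From conversion boilerplate for error enums. A minimal sketch of the pattern (the variant names below are illustrative, not the exact ones introduced in this PR):

use thiserror::Error;

#[derive(Debug, Error)]
pub enum ExampleParserError {
    // #[error(...)] generates the Display implementation for the variant
    #[error("unsupported attribute type: {0}")]
    UnsupportedAttributeType(u8),
    // #[from] generates From<std::io::Error>, so `?` converts automatically
    #[error("I/O error while reading MRT data: {0}")]
    Io(#[from] std::io::Error),
}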

Changes

  • Remove OneIoError and FilterError variants from ParserError. My reasoning for this was that these errors can only occur when BgpkitParser is being initialized and can instead be deferred to the caller at that time.
  • Change add_filter from accepting strings which require parsing to accepting Filters. For variants which accepted a wider range of string values (such as TsStart and TsEnd), I added constructors to Filter which parse a given variant from a string.
  • Update RecordIterator and ElemIterator so errors now get propagated to the caller. They iterate over Result<MrtRecord, ParserError> and Result<BgpElem, ParserError> respectively (a usage sketch follows this list).
  • Errors are no longer ignored. Some errors were logged but then ignored, which made it difficult to determine whether all records were actually being parsed and received correctly (especially in the case of the iterators). If an error causes an iterator to exit prematurely, that error is returned to the user before None is returned on the next iteration.
  • Remove ParserErrorWithBytes. My reasoning for this is that if the user wants the bytes related to the error, then they can call parse_common_header and parse_mrt_body themselves. This should give them greater flexibility over the data while removing the overhead involved with ParserErrorWithBytes.
  • Add a new integration test which simply reads a sample rib dump and update dump to verify that no errors occurred during parsing.
  • Switch from using the Buf trait (get_u8, get_u16, etc.) and Bytes functions (advance and split_to) directly to using ReadUtils. The motivation is that the Buf and Bytes functions panic upon running out of data. While not strictly necessary due to the extra precautions taken while parsing, this switch should protect against unforeseen panics (a sketch of the difference also follows this list).
  • Remove BgpModelsError. It only had a single variant which wrapped another error so I decided to remove it for simplicity.
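
As a usage sketch for the iterator change above (the file name is hypothetical, and it assumes BgpkitParser::new returns a Result so construction errors surface at the call site, and that the parser's default iterator is the ElemIterator described above):

use bgpkit_parser::BgpkitParser;

fn count_elems(path: &str) -> Result<u64, Box<dyn std::error::Error>> {
    // construction errors are now handled by the caller instead of being
    // wrapped into ParserError variants
    let parser = BgpkitParser::new(path)?;
    let mut count = 0;
    for elem in parser {
        // each item is a Result<BgpElem, ParserError>; an Err is propagated
        // to the caller instead of being logged and dropped
        let _elem = elem?;
        count += 1;
    }
    Ok(count)
}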

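Regarding the switch away from the panicking Buf accessors, the difference is roughly the following (a minimal sketch; read_u16_checked is a stand-in for the ReadUtils-style method, not its exact signature):

use bytes::{Buf, Bytes};

// Buf::get_u16 panics when fewer than 2 bytes remain. A fallible version
// checks the remaining length first and returns an error instead, which is
// the behavior the ReadUtils helpers are used for here.
fn read_u16_checked(data: &mut Bytes) -> Result<u16, std::io::Error> {
    if data.remaining() < 2 {
        return Err(std::io::Error::new(
            std::io::ErrorKind::UnexpectedEof,
            "not enough bytes to read a u16",
        ));
    }
    Ok(data.get_u16())
}
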
Notes

  • While I did convert RisliveError to use thiserror, I did not review it with nearly as much scrutiny as ParserError. Much of the error handling involves parsing messages, and most of that should really be implemented as part of the serde Deserialize implementation. This would remove the need for many of the error variants but would take a bit of work to implement.
  • I left a number of TODOs in the code in places where I was unsure how to proceed. This largely includes warnings which could potentially be converted to hard errors.
  • I need to finish adding documentation to the ParserError variants.

@jmeggitt (Contributor, Author)

Blocked by #123

@digizeph (Member) left a comment

Left a few comments. Overall, it looks great!

Comment on lines +187 to +188
// TODO: Should it be treated as a raw attribute instead?
_ => Err(ParserError::UnsupportedAttributeType(attr_type)),
digizeph (Member):

Unsupported is fine in my opinion. We can add raw attribute support later.

@jmeggitt (Contributor, Author), Sep 28, 2023:

I think you already implemented raw attribute support. It appears this line only executes if it finds an attribute type it recognizes (i.e. has an enum variant) but that does not get parsed.

digizeph (Member):

How about?

Suggested change:
- // TODO: Should it be treated as a raw attribute instead?
- _ => Err(ParserError::UnsupportedAttributeType(attr_type)),
+ _ => Ok(AttributeValue::Unknown(AttrRaw{ attr_type, bytes: Vec::from(attr_data) }))

We could probably introduce a new AttributeValue type Unsupported(AttrRaw), but I feel that's a bit unnecessary.

Comment on lines +197 to +199
// TODO: Is this correct? If we don't have enough bytes, split_to would panic.
// it's ok to have errors when reading partial bytes
warn!("PARTIAL: {}", e);
digizeph (Member):

This should be fine. However, if I recall correctly, the previous version would not panic because it maintains the number of bytes it holds available to read. We should probably double-check to make sure we don't panic in this case.

@jmeggitt (Contributor, Author), Sep 28, 2023:

You are right, it does not panic. However, it propagates the error that there are not enough bytes remaining before it reaches this code.
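
For reference, the kind of check being discussed, as a hypothetical helper (Bytes::split_to panics when asked for more bytes than it holds, so the length has to be validated, or the error propagated, before that call):

use bytes::{Buf, Bytes};

// Hypothetical helper: validate the length before split_to, which would
// otherwise panic if `len` exceeds the bytes actually available.
fn split_to_checked(data: &mut Bytes, len: usize) -> Option<Bytes> {
    if data.remaining() < len {
        return None; // the caller turns this into an error or the PARTIAL warning
    }
    Some(data.split_to(len))
}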

digizeph (Member):

I don't have a good idea on this issue. My suggestion is to keep this note here and come back to this if we see panics later on with the PARTIAL warning.

}

// TODO: Why do we sometimes change our length estimate?
digizeph (Member):

Need more context for this question.

@jmeggitt (Contributor, Author):

My confusion comes from this piece of code. Shouldn't the total size always be greater than or equal to the length of the message?

let bgp_msg_length = if (length as usize) > total_size {
    total_size - 19
} else {
    length as usize - 19
};

Could this be related to partial attributes being cut off?

digizeph (Member):

There is no guarantee that the number of bytes available matches the number of bytes encoded in the MRT message.

length is read from the MRT message:

let length = data.read_u16()?;

It is entirely possible that the value does not match the MRT message size we actually have, so we have that check there to identify potential data issues. Since we have already read all the bytes of the given MRT record into message, the length mismatch does not impact the parsing of the following records, which is why the current version emits warnings instead of hard errors.


if data.remaining() != bgp_msg_length {
// TODO: Why is this not a hard error?
digizeph (Member):

It should be a hard error. We should also make sure we can continue parsing other messages after this (it probably already does).

@jmeggitt (Contributor, Author), Sep 28, 2023:

When I change this to a hard error, parser::bmp::tests::test_route_monitoring fails with ParseError(InconsistentFieldLength { name: "BGP message length", expected: 109, found: 110 }). Do you have any idea why this happens? All of the other tests pass, including the integration tests I added, which parse an entire update file and RIB dump and check for errors.

digizeph (Member):

Honestly, I don't have many ideas about why this happens. The test was created by dumping some BMP raw messages from the RouteViews BMP stream. This might indicate some potential issues with the BMP parsing that we were previously unaware of. I wonder if you could successfully run the real-time-routeviews-kafka-openbmp example with your updated code? That example pulls live data from the RouteViews Kafka BMP stream.


// let pos_end = input.position() + opt_params_len as u64;
if input.remaining() != opt_params_len as usize {
// TODO: This seems like it should become a hard error
digizeph (Member):

Yes. But it shouldn't stop the parsing of messages after this.

digizeph (Member):

We could add the same error that you introduced previously (InconsistentFieldLength) here.
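
For illustration, such a check might look roughly like this (a sketch only; the field types are assumptions based on the InconsistentFieldLength failure quoted earlier in this thread):

// Sketch: field names follow the InconsistentFieldLength shape seen in the
// test failure above; the exact field types are assumptions.
if input.remaining() != opt_params_len as usize {
    return Err(ParserError::InconsistentFieldLength {
        name: "optional parameter length".to_string(), // assuming a String field
        expected: opt_params_len as usize,
        found: input.remaining(),
    });
}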

@@ -196,6 +189,7 @@ fn read_nlri(
return Ok(vec![]);
}
if length == 1 {
// TODO: Should this become a hard error?
digizeph (Member):

Yes. But it shouldn't stop the parsing of messages after this.


let peer_index = input.read_u16()?;
let originated_time = input.read_u32()?;
if add_path {
// TODO: Why is this value unused?
digizeph (Member):

I don't exactly remember why it has path_id here, but the read_nlri_prefix function will read the path_id and override it anyway, so it is ignored here.


let microsecond_timestamp = match &entry_type {
EntryType::BGP4MP_ET => {
// TODO: Error if length < 4
digizeph (Member):

Yes
