
max_length of nlp pipeline for e.g. Japanese #13207

Open
JWittmeyer opened this issue Dec 21, 2023 · 4 comments
Labels: docs (Documentation and website), lang / ja (Japanese language data and models)


@JWittmeyer

Not sure if this is intended behavior or a misunderstanding on my part. I'm assuming a misunderstanding, so I'm filing this as a documentation report.

The Language (nlp) class has a max_length parameter that seems to work differently for e.g. Japanese.

I'm currently trying to chunk texts that are too long by checking max_length and splitting based on it. For e.g. English texts this seems to work without any issues.

Basic approach code:

if len(content) > nlp.max_length:
    for chunk in __chunk_text(content, nlp.max_length-100):
        doc = nlp(chunk)
        #....    
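
For reference, __chunk_text is just a simple character-based splitter; a minimal sketch of such a helper (an assumed stand-in, not necessarily the exact original):

def __chunk_text(text: str, max_chunk_size: int):
    # Assumed helper: yield slices of at most max_chunk_size characters.
    for i in range(0, len(text), max_chunk_size):
        yield text[i:i + max_chunk_size]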

However, for the Japanese pipeline ja_core_news_sm this doesn't work.
After a bit of analysis I noticed that not the character length but the byte count needs to be considered.

def __utf8len(s:str):
    return len(s.encode('utf-8'))

if __utf8len(content) > nlp.max_length:
    #...

However, even with the byte-based approach I run into an error that looks max_length-related, but maybe isn't really?

Slightly reduced error trace:

    doc = nlp(content)
  File "/usr/local/lib/python3.9/site-packages/spacy/language.py", line 1014, in __call__
    doc = self._ensure_doc(text)
  File "/usr/local/lib/python3.9/site-packages/spacy/language.py", line 1105, in _ensure_doc
    return self.make_doc(doc_like)
  File "/usr/local/lib/python3.9/site-packages/spacy/language.py", line 1097, in make_doc
    return self.tokenizer(text)
  File "/usr/local/lib/python3.9/site-packages/spacy/lang/ja/__init__.py", line 56, in __call__
    sudachipy_tokens = self.tokenizer.tokenize(text)
Exception: Tokenization error: Input is too long, it can't be more than 49149 bytes, was 63960

I also double-checked the values for max_length (1000000), string length (63876) & byte length (63960).
Setting max_length by hand to 1100000 didn't change the error message, so I'm assuming something else (maybe sudachi itself?) defines the "Input is too long" error message.

It would be great if the documentation explained what the actual issue is and how to solve it (e.g. how to look up such size limits).

Which page or section is this issue related to?

Not sure where to add it, since I'm not sure whether it's directly Japanese-related. However, a note might be useful at https://spacy.io/models/ja or https://spacy.io/usage/models#japanese.

Furthermore, the note on max_length in general might need extending (if my assumption is correct, something like: the relevant length isn't the classic Python len(<string>) but the byte size, e.g. the letter "I" - len 1 - 1 byte vs. the kanji "私" - len 1 - 3 bytes).
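
For illustration, the difference in Python:

# Character count vs. UTF-8 byte size
print(len("I"), len("I".encode("utf-8")))    # -> 1 1
print(len("私"), len("私".encode("utf-8")))   # -> 1 3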

@adrianeboyd added the docs (Documentation and website) and lang / ja (Japanese language data and models) labels on Dec 21, 2023
@adrianeboyd
Contributor

nlp.max_length is not a hard internal constraint, but rather a kind of clunky way to protect users from confusing OOM errors. It was set with the "core" pipelines and a not-especially-new consumer laptop in mind. If you're not actually running out of memory on your system, you can increase it with no worries, especially for simpler tasks like tokenization only.

On the other hand, none of the components in a core pipeline benefit from very long contexts (typically a section or a page or even a paragraph is sufficient), so splitting up texts is often the best way to go anyway. Very long texts can use a lot of RAM, especially for parser or ner.
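
For example, something along these lines is usually enough (just a sketch; the right splitting strategy depends on your texts):

# Sketch: process paragraph-sized pieces instead of one huge text
paragraphs = [p for p in content.split("\n\n") if p.strip()]
docs = list(nlp.pipe(paragraphs))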

This limit for Japanese is completely separate from nlp.max_length and is coming directly from sudachipy. (I actually hadn't encountered it before.)

Their error message seems fine (much better than an OOM message with a confusing traceback from the middle of the parser), so I don't know if it makes sense for us to add another check in the spaCy Japanese tokenizer, which might then get out of sync with the upstream sudachipy constraints in the future.

But you're right that nlp.max_length isn't going to help directly with limiting the length in bytes, unless you set it much lower. But again, a lower limit would probably be fine in practice.
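
For instance (the numbers are only illustrative, chosen to stay under the sudachipy byte limit from the traceback above):

import spacy

nlp = spacy.load("ja_core_news_sm")
# Japanese characters are typically 3 bytes in UTF-8, so ~16000 characters
# keeps the UTF-8 size comfortably below sudachipy's ~49149-byte input limit.
nlp.max_length = 16000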

We'll look at adding this to the documentation!

@JWittmeyer
Author

JWittmeyer commented Dec 21, 2023

Thanks for the explanation, that helped clear up the confusion on my end, and I know how to proceed for my use case.


In case anyone ever stumbles upon this, here is the code I went with for byte splitting (though it probably still has a lot of optimization potential):

# Splits not strictly after x bytes, but ensures that at most x bytes are used
# per chunk without cutting a multi-byte character apart.
def __chunk_text_on_bytes(text: str, max_chunk_size: int = 1_000_000):
    # Character-to-byte ratio, used for an initial guess of the cut position.
    factor = len(text) / __utf8len(text)
    increase_by = int(max(min(max_chunk_size * .1, 10), 1))
    initial_size_guess = int(max(max_chunk_size * factor - 10, 1))
    final_list = []
    remaining = text
    while len(remaining):
        part = remaining[:initial_size_guess]
        if __utf8len(part) > max_chunk_size:
            # Initial guess is too large in bytes -> shrink it and retry.
            initial_size_guess = int(max(initial_size_guess - min(max_chunk_size * .001, 10), 1))
            continue
        cut_after = initial_size_guess
        # Grow the chunk until the byte limit is reached or the text is exhausted.
        while __utf8len(part) < max_chunk_size and part != remaining:
            cut_after = min(len(remaining), cut_after + increase_by)
            part = remaining[:cut_after]

        if __utf8len(part) > max_chunk_size:
            # The last growth step overshot the byte limit -> step back.
            cut_after -= increase_by
        final_list.append(remaining[:cut_after])
        remaining = remaining[cut_after:]

    return final_list
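
Usage then looks roughly like this (49_000 is just the sudachipy byte limit from above, rounded down a bit):

# Keep each chunk under sudachipy's ~49149-byte input limit
for chunk in __chunk_text_on_bytes(content, 49_000):
    doc = nlp(chunk)
    # ...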

@starlabman

Existing documentation

"""
...
max_length (int): The maximum allowed length of text for processing.
...
"""

Updated documentation

"""
...
max_length (int): The maximum allowed length of text for processing. The behavior of max_length may vary for different languages. Please refer to the language-specific documentation for more details.
...
"""

@adrianeboyd
Contributor

Thanks for the suggestion! I think that this description is slightly confusing for users, since nlp.max_length itself will behave the same way for all languages. What we need to highlight is that some individual tokenizers or components, especially those that wrap third-party libraries, may have their own internal length restrictions.
