What I expect is that both tokenizers produce the same output:

```
['▁text', 'a', '<s>', '▁text', 'b']
['▁text', 'a', '<s>', '▁text', 'b']
```

However, the actual outputs are:

```
['▁text', 'a', '<s>', '▁text', 'b']
['▁text', 'a', '<', 's', '>', 'text', 'b']
```
Why do these two tokenizers produce different segmentation results for special tokens?
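
For context, here is a minimal sketch of the kind of comparison I mean: a Hugging Face tokenizer that has `<s>` registered as a special token versus the underlying SentencePiece model applied to the raw string. The checkpoint name (`huggyllama/llama-7b`), the local `tokenizer.model` path, and the test string are placeholders rather than my exact setup.

```python
import sentencepiece as spm
from transformers import AutoTokenizer

# Placeholder checkpoint and model path; substitute your own.
hf_tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=False)
sp = spm.SentencePieceProcessor(model_file="tokenizer.model")

text = "text a<s>text b"

# The HF tokenizer first matches '<s>' against its registered special tokens,
# then runs SentencePiece only on the plain-text segments around it, so '<s>'
# survives as a single piece.
print(hf_tok.tokenize(text))

# The raw SentencePiece model has no special-token pass: '<s>' is treated as
# ordinary characters and split by the normal subword segmentation.
print(sp.encode(text, out_type=str))
```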