Closed
Labels: question (Further information is requested)
Description
I'm using a SequenceTagger model, but every time I use word embeddings other than 'glove' I get the following error:
RuntimeError: The expanded size of the tensor (2048) must match the existing size (2148) at non-singleton dimension 1. Target sizes: [9, 2048]. Tensor sizes: [9, 2148]
I've tried this with both stacked and non-stacked embeddings:
from flair.embeddings import XLNetEmbeddings
from flair.models import SequenceTagger

# 5. initialize sequence tagger
tagger: SequenceTagger = SequenceTagger(hidden_size=500,
                                        embeddings=XLNetEmbeddings(),
                                        tag_dictionary=tag,
                                        tag_type='ner',
                                        use_crf=True)

# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus_train)

# 7. start training
trainer.train(model_path,
              learning_rate=0.1,
              mini_batch_size=128,
              max_epochs=1)
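In case it helps with diagnosing the size mismatch, here is a small check I can run. It is only a sketch (it assumes the embedding_length property and embed() method that flair's token embedding classes expose, and the example sentence is arbitrary) to compare what XLNetEmbeddings reports against the 2048/2148 sizes in the error:

# Diagnostic only, not part of the training script above.
from flair.data import Sentence
from flair.embeddings import XLNetEmbeddings

xlnet = XLNetEmbeddings()
print(xlnet.embedding_length)      # the length the tagger should be built with

sentence = Sentence("a short test sentence")
xlnet.embed(sentence)
for token in sentence:
    print(token.text, token.embedding.shape)   # actual per-token embedding size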
I've also tried the stacked embeddings suggested in the documentation and get the same kind of error:
from typing import List

from flair.embeddings import (CharacterEmbeddings, FlairEmbeddings,
                              StackedEmbeddings, TokenEmbeddings,
                              WordEmbeddings)

embedding_types: List[TokenEmbeddings] = [
    WordEmbeddings('glove'),
    # comment in this line to use character embeddings
    CharacterEmbeddings(),
    # comment in these lines to use flair embeddings
    FlairEmbeddings('news-forward'),
    FlairEmbeddings('news-backward'),
]

embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)

# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger: SequenceTagger = SequenceTagger(hidden_size=256,
                                        embeddings=embeddings,
                                        tag_dictionary=tag,
                                        tag_type='ner',
                                        use_crf=True)

# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus_train)

# 7. start training
trainer.train(model_path,
              learning_rate=0.1,
              mini_batch_size=32,
              max_epochs=1)
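The same kind of check for the stacked setup, again only a sketch that reuses the embeddings and embedding_types objects defined above:

# Diagnostic only: the stack's reported length should be the sum of its parts
# and should match the per-token embedding that actually lands on a Sentence.
from flair.data import Sentence

print(embeddings.embedding_length)                      # total length of the stack
print([e.embedding_length for e in embedding_types])    # individual lengths

sentence = Sentence("a short test sentence")
embeddings.embed(sentence)
for token in sentence:
    print(token.text, token.embedding.shape)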
Is there something I'm doing wrong here, or is this a bug?