If anyone running this SentenceTransformerTrainer in Chapter 10 gets stuck, you can add report_to=["none"] to the training arguments:
from sentence_transformers.training_args import SentenceTransformerTrainingArguments
# Define the training arguments
args = SentenceTransformerTrainingArguments(
output_dir="base_embedding_model",
num_train_epochs=1, # The number of training rounds. We keep this at 1 for faster training but it is generally advised to increase this value.
per_device_train_batch_size=64, # The number of samples to process simultaneously on each device (e.g., GPU or CPU) during training. Higher values generally mean faster training.
per_device_eval_batch_size=64, # The number of samples to process simultaneously on each device (e.g., GPU or CPU) during evaluation. Higher values generally mean faster evaluation.
warmup_steps=100, # The number of steps over which the learning rate is linearly increased from zero to the initial learning rate. Note that we did not specify a custom learning rate for this training process.
fp16=False, # By enabling this parameter we allow for mixed precision training, where computations are performed using 16-bit floating-point numbers (FP16) instead of the default 32-bit (FP32). This reduces memory usage and potentially increases the training speed.
eval_steps=100, # The number of training steps after which an evaluation is performed.
logging_steps=100, # The number of training steps after which metrics are logged.
report_to=["none"] # <= The newly added argument. It disables third-party reporting integrations (e.g., W&B), which is what causes the trainer to hang.
)
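For anyone who wants to see the fix in context, here is a minimal sketch of how these arguments plug into the trainer. The model name and dataset below are placeholders, not necessarily the ones used in the book, so substitute your own:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Placeholder base model and training data; swap in the ones from Chapter 10.
model = SentenceTransformer("bert-base-uncased")
train_dataset = load_dataset("sentence-transformers/all-nli", "triplet", split="train")

# A standard contrastive loss for triplet-style data.
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,  # The SentenceTransformerTrainingArguments defined above, including report_to=["none"]
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()

With report_to=["none"], the trainer skips all logging integrations instead of waiting on one (such as a W&B login prompt), so training starts immediately.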