Discussion about this post

John Saunders:

Wow. I remember when minibatching/batch normalization/gradient accumulation were offered as performance improvements to lessen the number of weight updates in backpropagation. Carried forward because that's how it's always been done.

Now we await differences in model performance after the Transformers change.
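[For context on the technique mentioned above: gradient accumulation sums gradients over several minibatches and then applies a single weight update, instead of updating after every minibatch. A minimal sketch, assuming a PyTorch-style loop with a toy model and synthetic data, none of which come from the post:]

```python
import torch
from torch import nn

# Hypothetical toy setup; the post does not specify a model or dataset.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
data = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(16)]

accumulation_steps = 4  # minibatches folded into one weight update

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(data):
    # Scale the loss so the accumulated gradient is an average over the group.
    loss = loss_fn(model(inputs), targets) / accumulation_steps
    loss.backward()  # gradients add up in the .grad buffers
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()       # one weight update per accumulation_steps minibatches
        optimizer.zero_grad()  # clear accumulated gradients for the next group
```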

Remixa:

Glad to see it fixed!
