Friday 10 February 2023

The Transformer and Its Impact on Language Understanding in Machine Learning

The ‘Transformer’, a deep learning model proposed in the paper ‘Attention Is All You Need’, relies on a mechanism called attention and dispenses with recurrence (which many of its predecessors depended on). It reached a new state of the art in translation quality with significantly more parallelization and much less training time.
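As a rough sketch of the attention mechanism the paper builds on, here is a simplified, single-head scaled dot-product attention with no masking or learned projections; shapes and the toy input are illustrative only:

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Simplified scaled dot-product attention.

    Q, K: arrays of shape (seq_len, d_k); V: array of shape (seq_len, d_v).
    Every position attends to every other position in one matrix multiply,
    which is what makes the computation parallelizable.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # (seq_len, seq_len) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                         # weighted sum of value vectors

# Toy example: 4 tokens with 8-dimensional representations
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)                    # self-attention: Q = K = V
print(out.shape)                                               # (4, 8)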


Because it does not rely on recurrence, the Transformer does not require sequence data to be processed in order. This allows parallelization, so transformers can be trained on much larger datasets in much less time. The Transformer has also enabled pre-trained models like BERT, which brought much-needed transfer learning to NLP. Transfer learning is a machine learning method in which a model trained for one task is reused as the starting point for another task. With BERT this approach delivers several advantages, given the vast compute and resources required to train the base model. BERT is a large model (a 12-layer to 24-layer transformer) trained on a large corpus such as Wikipedia for a long time; it can then be fine-tuned to specific language tasks. Pre-training is expensive, but fine-tuning is inexpensive. BERT can also be adapted to many types of NLP tasks. This lets organizations leverage large transformer-based models trained on massive datasets and apply them to their own NLP tasks without spending significant time, computation, and effort.
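A minimal sketch of what fine-tuning a pre-trained BERT looks like in practice, assuming the Hugging Face `transformers` library and a hypothetical two-example toy dataset (real use would involve a full labeled dataset and proper training loops):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the pre-trained encoder and attach a fresh classification head
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Hypothetical toy task: positive (1) vs. negative (0) sentiment
texts = ["the loan was approved quickly", "the service was a disappointment"]
labels = torch.tensor([1, 0])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few gradient steps just to show the mechanics
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

The point of the sketch is the asymmetry the post describes: the expensive part (pre-training on a massive corpus) is already done, and only the small task-specific head and a short fine-tuning run are paid for by the downstream user.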
One challenge of using pre-trained transformer-based models is that they learn a representation of general language (general English, in the case of English models). Even a strong base model may not perform accurately when applied to a domain-specific context: ‘credit’, for example, means different things in everyday English and in a financial context. Significant fine-tuning on specific tasks and customer use cases can go a long way toward solving this.
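A small sketch of the ‘credit’ example, again assuming the Hugging Face `transformers` library and `bert-base-uncased` (sentences are made up): a contextual model already gives the same word different vectors in different contexts, and domain fine-tuning adjusts those representations toward the target domain.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def credit_vector(sentence):
    # Return the contextual embedding of the token "credit" in the sentence
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]            # (seq_len, hidden_size)
    credit_id = tokenizer.convert_tokens_to_ids("credit")
    position = (enc["input_ids"][0] == credit_id).nonzero()[0, 0]
    return hidden[position]

general = credit_vector("she deserves credit for the team's success")
finance = credit_vector("the bank extended a line of credit to the firm")
similarity = torch.cosine_similarity(general, finance, dim=0)
print(f"cosine similarity between the two 'credit' vectors: {similarity:.3f}")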
Research is underway on whether a similar transformer-based architecture can be applied to video understanding. The idea is that a video can be interpreted as a sequence of image patches extracted from individual frames, much as a sentence is a sequence of individual words in NLP.
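A rough sketch of that idea: treat a video clip as a sequence of fixed-size patch ‘tokens’, one per patch per frame, which a transformer could then attend over. The shapes and patch size below are illustrative, not taken from any specific paper.

import numpy as np

def video_to_patch_sequence(video, patch_size=16):
    """Flatten a video of shape (frames, height, width, channels) into a
    sequence of patch tokens, one token per patch per frame."""
    frames, height, width, channels = video.shape
    tokens = []
    for f in range(frames):
        for y in range(0, height, patch_size):
            for x in range(0, width, patch_size):
                patch = video[f, y:y + patch_size, x:x + patch_size, :]
                tokens.append(patch.reshape(-1))   # flatten each patch into a vector
    return np.stack(tokens)                        # (num_tokens, patch_size * patch_size * channels)

clip = np.random.rand(8, 224, 224, 3)              # 8 frames of 224x224 RGB
sequence = video_to_patch_sequence(clip)
print(sequence.shape)                              # (1568, 768): 8 frames x 14x14 patches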
