
June 26

Keynote

NLP-Architect by AI-Lab: An Open Source NLP Library for Developing NLP Solutions

Today, data science teams typically build a dedicated deep learning (DL) model for each NLP task. These data-hungry models must be fed large amounts of annotated data, which is time-consuming and costly to acquire because annotation requires domain expertise. At the same time, business environments are dynamic and complex, making it impractical to collect and label enough data for every scenario in a given domain within a reasonable time frame. Although much progress has been made in DL for NLP, this paradigm holds back the adoption of DL-based NLP in commercial environments. A new set of approaches has emerged around effective transfer learning methods: Embeddings from Language Models (ELMo), Universal Language Model Fine-tuning (ULMFiT), the Transformer, and, recently, Bidirectional Encoder Representations from Transformers (BERT). These approaches demonstrate that models pre-trained on one task, such as language modeling (LM), can be reused successfully for other NLP tasks, surpassing previous state-of-the-art results and reaching high accuracy with far less labeled data than training from scratch. In this talk, we present several transfer-learning-based solutions and tools from the NLP Architect library, targeted at non-DL/NLP experts, that make it possible to scale and adapt models to new domains by learning from in-domain data with only a small number of labeled examples.
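The transfer-learning recipe described in the abstract can be sketched in a toy, self-contained example: a frozen stand-in "encoder" (a random projection with a nonlinearity, purely hypothetical, not NLP Architect's actual API) plays the role of a pre-trained model, and only a small classification head is trained on a handful of labeled in-domain examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained encoder: a frozen random projection followed
# by a tanh nonlinearity. (Hypothetical; not NLP Architect's real API.)
W_pretrained = rng.normal(size=(100, 16)) / 10.0  # frozen "pretrained" weights

def encode(x):
    """Frozen feature extractor playing the role of a pre-trained LM."""
    return np.tanh(x @ W_pretrained)

# A handful of labeled in-domain examples (synthetic feature vectors).
X = rng.normal(size=(15, 100))
y = (X[:, 0] > 0).astype(float)

# Transfer learning, feature-extraction style: keep the encoder frozen and
# train only a small logistic-regression head on the few labeled examples.
feats = encode(X)
w, b, lr = np.zeros(16), 0.0, 1.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid predictions
    g = p - y                                   # gradient of log loss
    w -= lr * feats.T @ g / len(y)
    b -= lr * g.mean()

acc = (((feats @ w + b) > 0) == (y == 1)).mean()  # training accuracy of the head
```

The design point mirrors the abstract: because the expensive representation (here, the frozen encoder) is reused rather than learned, only a tiny head with 17 parameters needs fitting, which is feasible with very few labeled examples.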


Moshe Wasserblat

Deep Learning and NLP Research Manager

Mr. Moshe Wasserblat is currently the Natural Language Processing and Deep Learning Research Group Manager in Intel's Artificial Intelligence Products Group. Prior to Intel, he was with NICE Systems for more than 17 years, where he founded and led the Speech/Text Analytics Research team. His interests lie in speech processing and natural language processing. He was a co-founder and coordinator of the EXCITEMENT FP7 ICT program and has organized and managed several initiatives, including many Israeli Chief Scientist programs. He has filed more than 60 patents in the field of language technology and has several publications in international conferences and journals.

INTEL


About

INTEL


Intel AI Lab, a team of machine learning and deep learning researchers and engineers, data scientists, and neuroscientists, is focused on state-of-the-art research and development in artificial intelligence. Our core research areas range from projects with a direct impact on upcoming and future generations of Intel technologies to novel algorithm development in areas such as natural language, speech, and vision. Together with our full-stack and optimization expertise, we often apply the results of our research to help customers solve their business problems. Through our unique team composition, we also explore integrative approaches to AI that span traditional domain and task boundaries. The lab collaborates with academic research institutions and corporations to solve problems using AI. These collaborations include researchers at the University of California, Berkeley, Brown University, CERN, and Bar-Ilan University, as well as companies from industries such as finance, energy, healthcare, retail, manufacturing, automotive, government, and media & entertainment. We are committed to strong ethical principles and to the use of AI for good.
