San Francisco, CA, January 19, 2015 --(PR.com)-- Artificially intelligent computers are now using neural networks to analyse huge amounts of data with an unprecedented level of success.
The Deep Learning Summit is a unique opportunity to meet influential data scientists, technologists, world-leading researchers, entrepreneurs and data engineers all in the same room. Discover how advanced deep learning will impact your business and prepare for a world of smart artificial intelligence.
The summit will take place in San Francisco on 29-30 January, with 40+ experts discussing Neural Networks, Image Recognition, Language Processing, Advanced Deep Learning Algorithms, Artificial Intelligence, Machine Learning, Big Data and Computing Systems. Speakers will include exciting new startups, leading technologists and engineers, and world-class researchers from notable companies and institutions including Google, Loop AI, MIT, Flickr, Emotient, MetaMind, University of Toronto, Sentient, Metaio, Clarifai, Jibo and Stanford.
Sessions and topics at the Deep Learning Summit will include:
Natural Language Understanding
Many current language understanding algorithms rely on expert knowledge in the loop. Quoc Le is a research scientist at Google Brain, where he works on large-scale deep learning. His work has contributed to breakthroughs in object recognition, speech recognition and language understanding.
At the Deep Learning Summit, Quoc will discuss how deep learning can be used to understand texts without much prior knowledge. In particular, he will show how algorithms can learn vector representations of words and use them to translate unknown words between languages. Because these vector representations preserve the semantics of sentences and documents, they can be used for machine translation, text classification, information retrieval and sentiment analysis.
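The word-vector idea can be illustrated with a toy sketch: words become points in a vector space, and similar meanings end up close together. The 3-dimensional vectors below are invented for illustration only; models of the kind discussed at the summit learn hundreds of dimensions from large text corpora.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-dimensional "embeddings", hand-made for illustration; real models
# learn them automatically from billions of words of text.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.5, 0.9, 0.0]),
    "woman": np.array([0.5, 0.0, 0.9]),
}

# The classic analogy: king - man + woman should land nearest to queen.
# The query word itself is excluded from the candidates, as is standard.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(("queen", "man", "woman"), key=lambda w: cosine(target, vectors[w]))
```

With learned rather than hand-made vectors, the same nearest-neighbour arithmetic is what supports translating unknown words between languages.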
Predictive Data for Education
The elusive quest to identify and place skilled professionals has become an obsession in the talent wars of the tech industry. Building cognitive models using unstructured data and ubiquitous sensors allows the assessment not only of concept mastery, but meta-learning development as well. Such models can then be used to predict which content will be an effective learning experience for a given learner.
Socos is a cutting-edge EdTech company that applies cognitive modelling to align education with real-life outcomes. At the Deep Learning Summit, Vivienne Ming, co-Founder and Managing Partner of Socos, will discuss the concept of continuous passive predictive assessment, applied to both learners and professionals, from students to CEOs.
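At its simplest, predictive assessment means fitting a model that maps passively collected learner signals to learning outcomes. The sketch below fits a plain logistic-regression model to hypothetical features; the features, labels and model are invented stand-ins, not Socos's actual approach.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical per-learner features from passive sensing, e.g.
# [time on task, hint requests, prior mastery]; the label says whether
# a given piece of content produced a measurable learning gain.
X = rng.random((200, 3))
y = (X @ np.array([1.5, -2.0, 1.0]) + 0.1 * rng.standard_normal(200) > 0.2).astype(float)

# Fit logistic regression by plain gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
```

The fitted model can then rank candidate content by predicted probability of a learning gain for a given learner, which is the "predict which content will be an effective learning experience" step described above.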
Machine Learning & Augmented Reality
Dr. Jürgen Sturm heads the machine learning efforts at Metaio GmbH, the world-leading Augmented Reality technology provider. He and his team research machine learning techniques such as random forests to track and augment the human body in camera images. The goal of Metaio’s machine learning efforts is to create immersive virtual shopping experiences, where customers can try out new products without needing the physical item.
Spoken Language Understanding
The promises of deep learning have permeated all branches of computer cognition, including spoken language understanding. During the past decade we have witnessed tremendous progress, yet talking to machines is still perceived as a brittle technology. This is, more than ever, a big-data learning problem, and whoever has access to huge amounts of speech data can approach the challenge. This is why industry, as opposed to academia, has led the past decade of successes in speech recognition and natural language understanding. Are Siri, Cortana and Google’s voice assistants pushing these technologies from toys towards usable, useful and pervasive applications? What problems still need to be solved?
Roberto Pieraccini, Director of Advanced Conversational Technologies at Jibo, is a world recognized expert in the fields of speech recognition, natural language, dialog, and human-machine multi-modal interaction. At the Deep Learning Summit, Roberto will lead a talk on spoken language technology, discussing how computers are learning to understand humans.
Facial Recognition & Expression Analysis
Previous state-of-the-art approaches to facial expression recognition relied on handcrafted feature extraction and computer vision pipelines optimized for runtime speed and accuracy on relatively small datasets. Using specialized deep learning architectures trained on much larger datasets, Emotient has significantly improved accuracy over its previous academic and commercial efforts, even when both types of systems are trained on the same data. The potential for this technology is far-reaching, across the fields of healthcare, education, advertising and retail.
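The architectures referred to above replace handcrafted features with filters learned from data. A minimal numpy sketch of that convolution-plus-classifier pattern is below; the filters, pooling and classifier weights are random placeholders, not a trained model, and the seven emotion labels follow the "seven primary emotions" mentioned later in this release.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the core op of a convolutional layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

EMOTIONS = ["anger", "contempt", "disgust", "fear", "joy", "sadness", "surprise"]

face = rng.random((48, 48))               # stand-in for a grayscale face crop
kernels = rng.standard_normal((3, 3, 3))  # three 3x3 filters; learned in a real net

# Conv + ReLU + global average pooling, one feature per filter.
pooled = np.array([np.maximum(conv2d(face, k), 0).mean() for k in kernels])

W = rng.standard_normal((len(EMOTIONS), 3))  # untrained classifier weights
probs = softmax(W @ pooled)                  # one probability per emotion
```

In a deployed system, the filters and classifier weights would be trained end-to-end on large labelled datasets of facial expressions, which is the step that drives the accuracy gains described above.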
Dr. Joshua Susskind is co-Founder and Senior Data Scientist at Emotient, a start-up company focused on real-time perception of facial expressions from images and videos, where he develops algorithms and visualization techniques for understanding human behaviour. Marian Bartlett, co-Founder and Lead Scientist at Emotient, is a pioneer in the field of machine learning and computer vision for face analysis. She and her colleagues developed software that automatically detects facial expressions of the seven primary emotions, as well as individual facial muscle movements.
Tickets & Registration
For further information and to register, go to: re-work.co/deep-learning