DEEP LEARNING SUMMIT Join the smart artificial intelligence revolution.

September 24-25 | 2015 LONDON

WHO ARE WE? RE.WORK is the premier event for emerging technology and innovation. RE.WORK showcases and explores exponentially accelerating technology and its impact on business and society.

RE.WORK brings together the most influential technologists, entrepreneurs, academics and business leaders to collaborate and reshape the future. Gain insight into breakthrough technology innovations through the world’s leading innovators and decision-makers.

Emerging technology is providing an unprecedented era of opportunity to create a more sustainable, healthier, wealthier and more equal society.

Partnerships and new business opportunities will be created through a program of inspiring fireside chats, interactive panel sessions and keynote presentations from world-class speakers, as well as speed-networking, expert office hours, workshops, ‘What’s Next’ sessions and exhibition areas. How will disruptive technology impact your industry?

WHY ATTEND?

RE.WORK is different from the usual technology summit. We’re focusing on global challenges & breakthrough technological innovations. At RE.WORK, the most influential innovators, leading technologists and disruptive entrepreneurs will come together to explore change and create the future. Stand still and you’ll be left behind. It’s time to learn about breakthrough future technology.

By 2050 there will be 9 billion people on the planet. How are we going to provide food, healthcare and education for all? What about energy supplies? Urban living? Transport? Safety? Increased inequality? Technology is disrupting the world at a rapid pace. Get prepared.

WHO WILL YOU MEET?

• Entrepreneurs
• Data Scientists
• Technologists
• Industry Leaders
• Data Engineers
• Big Data Experts

WHY SHOULD YOU ATTEND? Get ready to be inspired. The Deep Learning Summit is a unique opportunity to meet influential technologists, data scientists, world-leading strategists, entrepreneurs and data engineers all in the same room. Discover how to future-proof your business and prepare for the smart artificial intelligence world.

• Discover advances in deep learning and smart AI from the world’s leading innovators
• Understand how deep learning will impact your industry
• Discover new business opportunities
• Identify the latest technology trends and innovations
• Interact with influential business executives, innovators and business leaders

WHAT TOPICS WILL BE COVERED?

• Deep Learning
• Speech Recognition
• Neural Networks
• Image Retrieval
• Applied Machine Learning
• Pattern Recognition
• Big Data
• Deep Learning Algorithms

WHAT INDUSTRIES WILL BE AFFECTED?

• Manufacturing
• Engineering
• Healthcare
• Computing
• Connectivity
• Security
• Medicine
• Social
• Communications
• Computing Systems

PRESENTATIONS 


MAX WELLING Professor of Computer Science University of Amsterdam Max Welling is a Professor of Computer Science at the University of Amsterdam and the University of California, Irvine. In the past he held postdoctoral positions at Caltech (’98-’00), UCL (’00-’01) and the University of Toronto (’01-’03). He received his PhD in ’98 under the supervision of Nobel laureate Prof. G. 't Hooft. Max Welling serves as associate editor-in-chief of IEEE TPAMI, one of the highest-impact journals in AI (impact factor 4.8). He serves on the editorial boards of JMLR and JML and was an associate editor for Neurocomputing, JCGS and TPAMI. In 2009 he was conference chair for AISTATS, in 2013 he was program chair for NIPS (the largest and most prestigious conference in machine learning), in 2014 he was general chair for NIPS and in 2016 he will be a program chair at ECCV. He has received multiple grants from NSF, NIH, ONR and NWO, including an NSF CAREER grant in 2005. He is the recipient of the ECCV Koenderink Prize in 2010 and the best paper award at ICML 2012. Welling is currently the director of the master’s program in artificial intelligence at the UvA and a member of the advisory board of the newly opened Amsterdam Data Science Center. He is also a member of the Neural Computation and Adaptive Perception program at the Canadian Institute for Advanced Research. Welling’s research focuses on large-scale statistical learning. He has made contributions in Bayesian learning, approximate inference in graphical models, deep learning and visual object recognition, and has over 150 academic publications.

PAUL MURPHY CEO Clarify

Paul Murphy is one of Clarify's founders and its CEO. Paul's career in the software industry has spanned twenty years and three continents. Ten years were dedicated to understanding and building large systems on Wall Street for clients like J.P. Morgan and Salomon Brothers. Paul's work in this area allowed him to explore a broad range of computing solutions, from mainframes to web services, and the gamut of space-time tradeoffs required by dissimilar front- and back-office systems. Thirteen years ago, Paul moved to London to work at Adeptra, a pioneer in the use of automated outbound calling for credit card fraud detection and prevention. As Adeptra's CTO, he developed all of the software that enabled Adeptra to place intelligent, interactive outbound calls on behalf of clients. These systems made extensive use of text-to-speech and voice recognition technology. Since then, Paul has dedicated his time to developing technologies that leverage emerging voice processing techniques.



DR. BLAISE THOMSON CEO VocalIQ

Before co-founding VocalIQ, Blaise spent several years researching new approaches to building spoken dialogue systems, first as part of his PhD and then as a Research Fellow at the University of Cambridge. Many of these new ideas are integrated into VocalIQ's technology and have been awarded prizes within the research community. Specifically, Dr. Thomson has received multiple awards from the IEEE and the journal Computer Speech and Language for his groundbreaking research into natural language processing and machine learning algorithm design. He received a first-class BSc (Hons) in Pure Mathematics from the University of Cape Town, South Africa, in 2004. Outside of work, he enjoys playing guitar and dancing.

LIOR WOLF Research Scientist Google Image Annotation using Deep Learning and Fisher Vectors We present a system addressing one of the holy grails of computer vision: matching images and text, and describing an image with automatically generated text. Our system is based on combining deep learning tools for images and text, namely convolutional neural networks, word2vec and recurrent neural networks, with a classical computer vision tool, the Fisher Vector. The Fisher Vector is modified to support hybrid distributions that are a better fit for natural language processing. Our method proves extremely potent, and we outperform all competing methods by a significant margin. Prof. Lior Wolf is a faculty member at the School of Computer Science at Tel-Aviv University. Previously, he was a postdoctoral associate in Prof. Poggio's lab at MIT. He graduated from the Hebrew University, Jerusalem, where he worked under the supervision of Prof. Shashua. Lior Wolf was awarded the 2008 Sackler Career Development Chair, the Colton Excellence Fellowship for new faculty (2006-2008), the Max Shlumiuk Award for 2004, and the Rothschild Fellowship for 2004. His joint work with Prof. Shashua at ECCV 2000 received the best paper award, and their work at ICCV 2001 received the Marr Prize honorable mention. He was also awarded the best paper award at the post-ICCV 2009 workshop on eHeritage and the pre-CVPR 2013 workshop on action recognition. Prof. Wolf's research focuses on computer vision and applications of machine learning, and includes topics such as face identification, document analysis, digital paleography and video action recognition.
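
The core idea, matching images and sentences in a shared space, can be illustrated in a few lines. The sketch below is a toy stand-in, not the talk's actual system: random vectors take the place of pretrained CNN features and word2vec embeddings, and simple mean pooling stands in for the (hybrid) Fisher Vector the talk describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pretrained models: a CNN image embedding and word2vec
# vectors, both random here purely for illustration.
DIM = 300
word2vec = {w: rng.standard_normal(DIM) for w in
            "a dog runs on the beach cat sleeps sofa".split()}
W_img = rng.standard_normal((DIM, DIM))   # hypothetical learned projection

def embed_sentence(sentence):
    # Mean pooling of word vectors; the talk's system uses a (hybrid)
    # Fisher Vector here instead, a richer pooled statistic.
    vecs = [word2vec[w] for w in sentence.lower().split() if w in word2vec]
    return np.mean(vecs, axis=0)

def embed_image(cnn_features):
    # Project CNN features into the shared image-text space.
    return W_img @ cnn_features

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

image = rng.standard_normal(DIM)          # pretend CNN features
for s in ["a dog runs on the beach", "the cat sleeps on the sofa"]:
    print(s, "->", round(cosine(embed_image(image), embed_sentence(s)), 3))
```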

BERNARDINO ROMERA PAREDES Postdoctoral Research Assistant University of Oxford

Deep Holistic Image Understanding Image understanding involves not only object recognition, but also object delineation. This shape recovery task is challenging for two reasons: first, the necessity of learning a good representation of the visual inputs; second, the need to account for contextual information across the image, such as edges and appearance consistency. Deep convolutional neural networks are successful at the former, but have limited capacity to delineate visual objects. I will present a framework that extends the capabilities of deep learning techniques to tackle this scenario, obtaining cutting-edge results in semantic segmentation (i.e. detecting and delineating objects) and depth estimation. Bernardino is a postdoc in the Torr Vision Group at the University of Oxford. He received his PhD from University College London in 2014, supervised by Prof. Massimiliano Pontil and Dr. Nadia Berthouze. He has published in top-tier machine learning conferences such as NIPS, ICML and AISTATS, receiving several awards such as the Best Paper Runner-up Prize at ICML 2013 and the Best Paper Award at ACII 2013. During his PhD he interned at Microsoft Research, Redmond. His research focuses on multitask and transfer learning methods applied to computer vision tasks such as object recognition and segmentation, and emotion recognition.
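
Semantic segmentation ultimately reduces to per-pixel classification. The following minimal sketch (toy shapes and random weights, not the speaker's framework) shows the common final step: a 1x1 convolution maps a CNN feature map to per-pixel class scores, and an argmax yields the delineation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical CNN feature map: 8 channels over a 4x4 spatial grid.
C, H, W, NUM_CLASSES = 8, 4, 4, 3
features = rng.standard_normal((C, H, W))

# A 1x1 convolution is just a per-pixel linear map from channels to
# class scores; it turns recognition features into a dense labelling.
W_cls = rng.standard_normal((NUM_CLASSES, C))
scores = np.einsum("kc,chw->khw", W_cls, features)

# Per-pixel argmax delineates the objects (random labels here).
segmentation = scores.argmax(axis=0)
print(segmentation)
```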

SÉBASTIEN BRATIÈRES Speech Evangelist at dawin gmbh & PhD Researcher University of Cambridge Deep Learning for Speech Recognition Speech technology is moving ever faster from research conferences to the consumer market, and deep learning accelerated this trend in 2010-2013. This talk will survey advances in speech technology, mainly, but not only, due to deep neural net models. We’ll go through architectures in use today (DNN acoustic models, but also CNNs and, more recently, long short-term memory networks). I’ll draw the connection to business issues such as the need for privacy-preserving (e.g. embedded) technology, or opportunities for small teams who don’t command huge computing clusters and masses of data. Finally, I’ll give an outlook on future directions: end-to-end speech recognition and the integration of spoken language understanding. Sébastien Bratières has spent 15 years in the speech and language industry in different European ventures, starting from the EU branch of Tellme Networks (now Microsoft) to startups in speech recognition and virtual conversational agents. Today, Sébastien is engaged in a PhD in statistical machine learning with Zoubin Ghahramani at the University of Cambridge, UK, and consults for dawin gmbh, a German SME producing custom speech solutions for industry use. Sébastien graduated with master’s degrees from Ecole Centrale Paris, France, in engineering, and from the University of Cambridge in speech and language processing.
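
To make the "DNN acoustic model" concrete: such a model maps a window of acoustic frames to a posterior over phonetic states, which a decoder then combines with a language model. A minimal sketch with made-up layer sizes and random weights, not any system from the talk:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical DNN acoustic model: stacked MFCC frames in, a softmax
# over context-dependent phone states out. Weights random for illustration.
N_IN, N_HID, N_STATES = 39 * 11, 512, 2000  # 11 stacked 39-dim MFCC frames

W1, b1 = rng.standard_normal((N_HID, N_IN)) * 0.01, np.zeros(N_HID)
W2, b2 = rng.standard_normal((N_STATES, N_HID)) * 0.01, np.zeros(N_STATES)

def acoustic_model(frames):
    h = np.maximum(0.0, W1 @ frames + b1)    # ReLU hidden layer
    logits = W2 @ h + b2
    p = np.exp(logits - logits.max())        # numerically stable softmax
    return p / p.sum()                       # posterior over phone states

frame_window = rng.standard_normal(N_IN)     # pretend MFCC context window
posteriors = acoustic_model(frame_window)
print("most likely state:", posteriors.argmax())
```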

JÖRG BORNSCHEIN Global Scholar CIFAR

Combining Directed & Undirected Generative Models In this talk I will present a new method for training deep models for unsupervised and semi-supervised learning. The models consist of two neural networks with multiple layers of stochastic latent units. The first network supports fast approximate inference given some observed data. The other network is trained to approximately model the observed data using higher-level concepts and causes. The learning method is based on a new bound for the log-likelihood, and the trained models are automatically regularized so that each network makes the other's task as easy as possible. Jörg Bornschein is a Global Scholar with the Canadian Institute for Advanced Research (CIFAR) and a postdoctoral researcher in Yoshua Bengio’s machine learning lab at the University of Montreal. He is currently concentrating on unsupervised and semi-supervised learning using deep architectures. Before moving to Montreal, Jörg obtained his PhD from the University of Frankfurt, working on large-scale Bayesian inference for non-linear sparse coding with a focus on building maintainable and massively parallel implementations for HPC clusters. Jörg was also chair and one of the founders of the German hackerspace “Das Labor”, which received a federal government award in 2005 for promoting STEM programs to prospective students.
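
One widely used family of log-likelihood bounds for such inference-network/generative-network pairs is the importance-weighted bound, where samples from the inference network reweight the generative model's joint probability. The sketch below evaluates such a bound on a deliberately tiny toy model (a single Bernoulli latent with made-up probabilities); it illustrates the idea, not the talk's specific bound.

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny toy model with one Bernoulli latent h and Bernoulli visible x.
# q(h|x) plays the inference network, p(h)p(x|h) the generative network;
# all probabilities here are made-up constants for illustration.
p_h = 0.3                       # prior p(h=1)
p_x_given_h = {0: 0.2, 1: 0.9}  # likelihood p(x=1|h)
q_h_given_x = 0.7               # inference net's q(h=1|x=1)

def bernoulli_logpmf(value, p):
    return np.log(p if value == 1 else 1.0 - p)

def iw_bound(x, K=1000):
    # log p(x) >= E[ log (1/K) sum_k p(x,h_k)/q(h_k|x) ],  h_k ~ q(.|x)
    h = (rng.random(K) < q_h_given_x).astype(int)
    log_w = np.array([
        bernoulli_logpmf(hk, p_h)
        + bernoulli_logpmf(x, p_x_given_h[hk])
        - bernoulli_logpmf(hk, q_h_given_x)
        for hk in h])
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).mean())  # stable log-mean-exp

exact = np.log(p_h * p_x_given_h[1] + (1 - p_h) * p_x_given_h[0])
print("exact log p(x=1):", round(exact, 4), " bound:", round(iw_bound(1), 4))
```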

MIRIAM REDI Research Scientist Yahoo Labs The Subjective Eye of Machine Vision Vision algorithms have achieved impressive performances in visual recognition. Nevertheless, an image is worth a thousand words, and not all these words refer to visible properties such as objects and scenes. In this talk we will explore the subjective side of visual data, investigating how machine learning can detect intangible properties of images and videos, such as beauty, creativity and more curious characteristics. We will see the impact of such detectors in the context of web and social media, and we will analyze the precious contribution of computer vision in understanding how people and cultures perceive visual properties, underlining the importance of feature interpretability for this task. Miriam Redi is a Research Scientist at Yahoo Labs London. Her research focuses on content-based social multimedia analysis, with publications in top-tier conferences such as ACM MM, CVPR and ICWSM (best paper award). In particular, she explores ways to automatically assess visual aesthetics and creativity, and to exploit the power of computer vision in the context of web, social media and culture understanding. Miriam received her PhD from the Multimedia group at EURECOM, Sophia Antipolis, and was subsequently a postdoc in the Social Media group at Yahoo Labs Barcelona. She maintains collaborations with leading academic research groups in both the multimedia and social media communities.

MATTHEW ZEILER Founder & CEO Clarifai Inc

Leveraging Multiple Dimensions It is well understood that automation is needed to cope with the exponential growth in data being generated. At Clarifai, we’ve built a flexible deep learning infrastructure based on state-of-the-art image classification. Our technology continues to evolve and tackle many new problems by leveraging different data sources and novel algorithms. This presentation will discuss some of the recent performance improvements of the system and how it can be leveraged in a variety of real-world applications to improve how industry and consumers alike manage their data. Clarifai was founded by Matt Zeiler, a University of Toronto and NYU alumnus who worked with several pioneers in neural networks, and Adam Berenzweig, who left Google after 10+ years, where he worked on Goggles and visual search. Matthew Zeiler, PhD, Founder and CEO of Clarifai Inc., studied machine learning and image recognition with several pioneers in the field of deep learning at the University of Toronto and New York University. His insights into neural networks produced the top 5 results in the 2013 ImageNet classification competition. He founded Clarifai to push the limits of practical machine learning, which will power the next generation of intelligent applications and devices.

KORAY KAVUKCUOGLU Research Scientist Google DeepMind End-to-End Learning of Agents Reinforcement learning agents have achieved some successes in a variety of domains; however, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. In this talk I will explain a novel algorithm (Deep Q-Network) that combines deep learning and reinforcement learning to enable agents to derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. The Deep Q-Network (DQN) algorithm achieves human-level performance in the Atari 2600 domain, operating directly on raw images and game scores. Koray Kavukcuoglu, PhD, Principal Researcher, trained and worked as an aerospace engineer before doing a machine learning PhD at NYU with Yann LeCun. Whilst there, he co-wrote the Torch platform, one of the most heavily used machine learning libraries in the world. Following his PhD, Koray was a Senior Researcher at Princeton/NEC Labs, where he worked on applying cutting-edge ML techniques.
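
At the heart of DQN is a simple temporal-difference update: the network is regressed towards a target built from the observed reward and a frozen copy of itself. A minimal sketch with a linear Q-function and a made-up transition (the real system uses a convolutional network, replay memory and minibatches):

```python
import numpy as np

rng = np.random.default_rng(4)

N_FEATURES, N_ACTIONS, GAMMA, LR = 8, 4, 0.99, 0.01

# Linear Q-functions stand in for the deep network and its frozen
# "target network" copy; weights are random for illustration.
W_online = rng.standard_normal((N_ACTIONS, N_FEATURES)) * 0.1
W_target = W_online.copy()

def q_values(W, state):
    return W @ state  # one Q-value per action

# One made-up transition (s, a, r, s') as would come from replay memory.
s, a, r = rng.standard_normal(N_FEATURES), 2, 1.0
s_next = rng.standard_normal(N_FEATURES)

# DQN target: reward plus discounted best target-network value at s'.
y = r + GAMMA * q_values(W_target, s_next).max()

# Gradient step on the squared TD error, for the taken action only.
td_error = y - q_values(W_online, s)[a]
W_online[a] += LR * td_error * s
print("TD error before update:", round(float(td_error), 3))
```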

ALEX MATEI mHealth Manager Bupa

Deep Learning for Digital Health Bupa Global Institute for Digital Health Excellence (GLIDHE) is a partnership between University College London and Bupa, aiming to reduce global demands on healthcare and improve quality of life. GLIDHE’s mission is to research, create, test and evaluate innovative, commercially sustainable digital tools which promote behaviour change and healthier lifestyles. Deep learning offers an important opportunity to deploy such tools at a global scale and opens up new avenues to engage consumers in their health choices. This presentation will discuss our experience to date with embedding deep learning systems in consumer applications. We will discuss the possibilities we have identified for using deep learning to improve end-user experience and clinical effectiveness. We have assessed image classification and speech recognition services for use in health prevention initiatives. We are investigating ways to leverage our existing clinical and lifestyle content for digital coaching. Finally, we are using deep learning to personalise smoking cessation programs. Alex Matei is a PhD researcher in the Computer Science Department at UCL and mHealth Manager at Bupa. He is interested in personalised behaviour change programs and new ways of engaging consumers in their lifestyle choices.

SVEN BEHNKE Head of Computer Science Department University of Bonn From the Neural Abstraction Pyramid to Semantic RGB-D Perception The first part of the talk will focus on the Neural Abstraction Pyramid, a deep learning architecture proposed by the speaker in 1998. For this architecture, layer-by-layer unsupervised learning creates increasingly abstract image representations. The hierarchical recurrent convolutional neural networks were trained in a supervised way to iteratively solve computer vision tasks such as super-resolution, image denoising and face localization. The key idea is the incorporation of contextual information for the iterative resolution of local ambiguities. The second part of the talk will focus on more recent work on deep learning for object-class segmentation of images and semantic RGB-D perception. Prof. Dr. Sven Behnke is a full professor for Computer Science at the University of Bonn, Germany, where he heads the Autonomous Intelligent Systems group. He has been investigating deep learning since 1997. In 1998, he proposed the Neural Abstraction Pyramid, hierarchical recurrent convolutional neural networks for image interpretation, and developed unsupervised methods for layer-by-layer learning of increasingly abstract image representations. The architecture was also trained in a supervised way to iteratively solve computer vision tasks such as super-resolution, image denoising and face localization. In recent years, his deep learning research has focused on learning object-class segmentation of images and semantic RGB-D perception.

ALEX GRAVES Research Scientist Google DeepMind

Neural Turing Machines Neural Turing Machines extend the capabilities of neural networks by coupling them to an external memory matrix, which they can selectively interact with. The combined system embodies a kind of 'differentiable computer' which can be trained with gradient descent. This talk describes how Neural Turing Machines can learn basic computational algorithms, such as associative recall, from input and output examples only. Alex Graves, PhD, is a world-renowned expert in recurrent neural networks and generative models. Alex did a BSc in Theoretical Physics at Edinburgh, Part III Maths at Cambridge, and a PhD in AI at IDSIA, followed by postdocs at TU Munich and with Prof. Geoff Hinton at the University of Toronto. Most recently, Alex has been spearheading DeepMind's work on Neural Turing Machines.
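
The "selective interaction" with memory is what keeps the system differentiable: reads and writes are soft, weighted over all memory slots. The sketch below shows one piece of the mechanism, content-based addressing for a read, with random memory contents; it is a toy illustration, not DeepMind's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

N_SLOTS, WIDTH = 6, 4
memory = rng.standard_normal((N_SLOTS, WIDTH))      # external memory matrix
key = memory[3] + 0.1 * rng.standard_normal(WIDTH)  # noisy query key
beta = 5.0                                          # key strength (sharpness)

# Content-based addressing: softmax over cosine similarities.
sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key))
w = np.exp(beta * sims)
w /= w.sum()

# Differentiable read: a weighted sum of memory rows, so gradients can
# flow back through the addressing, as required for gradient descent.
read = w @ memory
print("attention weights:", np.round(w, 3))  # weights should peak at slot 3
```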

NIC LANE Principal Scientist Bell Labs Squeezing Deep Learning onto Wearables & Phones Breakthroughs from the field of deep learning are radically changing how sensor data from cameras and microphones are interpreted, and how the high-level information needed by mobile apps is extracted. The state of the art in computational models for inferring faces, objects, activities and context is increasingly based on the principles and algorithms of deep learning. It is critical that the gains in inference accuracy and robustness that these models afford us become routinely embedded in the emerging sensor-based mobile apps used by consumers. Unfortunately, this is not happening today: even though mobile apps present some of the most challenging examples of noisy and complex sensor data we face, in far too many cases smartphones and wearables use machine learning methods that were superseded by deep learning years ago. In this talk, I will describe our recent work in developing general-purpose support for deep learning-based inference on resource-constrained mobile devices. Our goal is to radically lower the mobile resources (such as energy and computation) consumed by these modeling techniques at inference time, removing the key bottleneck preventing the widespread use of these algorithms. The foundation of this research is the rethinking of how inference algorithms operate under mobile conditions, along with increasing the utilization of the complete range of computational units (e.g., DSPs, GPUs, CPUs) now present in devices like watches, glasses and phones. Ultimately, in this work we aim to completely change how mobile sensor data is processed, and in turn what mobile apps are capable of, in the next generation of personal sensing devices.
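
One standard lever for shrinking inference cost on such devices (a common technique, not necessarily the speaker's specific method) is low-precision arithmetic. A minimal sketch of symmetric 8-bit weight quantization, which cuts weight memory by 4x at a small accuracy cost:

```python
import numpy as np

rng = np.random.default_rng(6)

weights = rng.standard_normal((256, 256)).astype(np.float32)

# Symmetric int8 quantization: a single scale for the whole tensor.
scale = np.abs(weights).max() / 127.0
w_int8 = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to check what the 4x memory saving costs in accuracy.
w_restored = w_int8.astype(np.float32) * scale
err = np.abs(weights - w_restored).max()
print(f"max abs error: {err:.5f}, bytes: {weights.nbytes} -> {w_int8.nbytes}")
```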

EISO KANT Co-Founder & Managing Director Tyba

Using Neural Networks To Predict Developers' Chances to Get Hired We have combined the GitHub data set of over 11 million open-source projects and 3.5 million developers with the entire work histories of 700,000 developers. Using a neural network, we analyse the code written by each developer in relation to the companies they’ve worked at. At Tyba we match developers with jobs they could be a good fit for. Eiso Kant is the co-founder of Tyba, an online recruitment platform for finding exciting jobs around the world, connecting and matching the most suitable talent with job opportunities at the most exciting startups. He is currently Tyba’s Managing Director and a member of the board. Eiso started his first internet venture at the age of 14, building an e-commerce site that sold more than 10,000 classic lithography works online. At age 17, Eiso went on to found Twollars, a social startup focused on raising money for charities via Twitter. Eiso is currently focused on building a great company together with an amazing team at Tyba.

ALISON LOWNDES Deep Learning Solutions Architect & Community Manager NVIDIA Deep Learning's Impact on Modern Life This 60-year-old research field within artificial intelligence has recently exploded across both media and academia. Research breakthroughs are now filtering into almost every facet of human life, commercial and personal. What was apparently sci-fi, machines that can see, hear and understand the world around them, is fast becoming the norm on a grand scale. We take a closer look at the reality of the perfect storm created by society’s big data and NVIDIA’s GPU computational power. Alison is Deep Learning Solutions Architect and Community Manager EMEA, and a recent mature graduate in Artificial Intelligence (University of Leeds), combining technical and theoretical computer science with a physics background. She completed a thorough empirical study of deep learning, specifically with GPU technology, covering the entire history and technical aspects of GPGPU with the underlying mathematics. 25+ years in international project management and entrepreneurship, her role as Founder Trustee of a global volunteering network (in her spare time) and two decades spent within the internet arena give her a universal view of any problem.

BEN MEDLOCK Co-Founder & CTO SwiftKey

As co-founder and CTO of SwiftKey, Ben Medlock invented the intelligent keyboard for smartphones and tablets that has transformed typing on touchscreens. The company’s mission is to make it easy for everyone to create and communicate on mobile. SwiftKey is best known for its smart typing technology which learns from each user to accurately autocorrect and predict their most-likely next word, and features on more than 250 million devices to date. SwiftKey Keyboard for Android is used by millions around the world and recently went free on Google Play after two years as the global best-selling paid app. SwiftKey Keyboard for iPhone and iPad launched in September 2014, following the success of iOS note-taking app SwiftKey Note. SwiftKey has been named the No 1 hottest startup in London by Wired magazine, ranked top 5 in Fast Company’s list of the most innovative productivity companies in the world and has won a clutch of awards for its innovative products and workplace. Ben has a First Class degree in Computer Science from Durham University and a PhD in Natural Language and Information Processing from the University of Cambridge.

DAVID PLANS VP of Product BioBeats At BioBeats, we're working on projects with AXA, Microsoft and Bupa that help people be well, fight stress and be more productive. In most of these projects, deep learning approaches are taken to train models that can classify, predict and illuminate behaviour from the person's body and actions. Most of our classifiers learn from smartphone sensors, but increasingly our algorithms ingest from wearable sensors such as the Microsoft Band, Apple Watch and upcoming projects from Google and Samsung. Our approach to building machine-learning-driven applications learns from evidence-based psychosocial intervention practices in mental health, but embodies continuous cardiovascular, skin and movement-based sensor data in order to arrive at profound but granular insight for the individual and their care or employer circle. Dr David Plans is a member of the University of Surrey’s Center for Digital Economy and Center for Vision, Speech and Signal Processing, and is working towards machine learning solutions to foster human wellbeing. His primary research focus is adaptive media and affective modelling. Having worked on early mHealth projects in the NHS, he is now leading smartphone and wearable research projects at Bupa, AXA/PPP and Microsoft Health with his startup, BioBeats, where they are helping actuarial and care provision teams think differently about preventative health.

EKATERINA VOLKOVA-VOLKMAR Researcher Bupa

Deep Learning for Digital Health Bupa Global Institute for Digital Health Excellence (GLIDHE) is a partnership between University College London and Bupa, aiming to reduce global demands on healthcare and improve quality of life. GLIDHE’s mission is to research, create, test and evaluate innovative, commercially sustainable digital tools which promote behaviour change and healthier lifestyles. Deep learning offers an important opportunity to deploy such tools at a global scale and opens up new avenues to engage consumers in their health choices. This presentation will discuss our experience to date with embedding deep learning systems in consumer applications. We will discuss the possibilities we have identified for using deep learning to improve end-user experience and clinical effectiveness. We have assessed image classification and speech recognition services for use in health prevention initiatives. We are investigating ways to leverage our existing clinical and lifestyle content for digital coaching. Finally, we are using deep learning to personalise smoking cessation programs. Ekaterina Volkova-Volkmar is a researcher at Bupa, London, UK. She completed her PhD at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, in 2014. With a research background in neuroscience, computer science and computational linguistics, Ekaterina is interested in integrating deep learning methods into digital solutions for behaviour change. Her current focus is on developing intelligent digital coaching services to help people improve their lifestyles and prevent diseases. More broadly, her research aims to bring human-computer interaction to a new level of naturalness and utility by using adaptable and context-aware approaches to the analysis of human behaviour.

MARIE-FRANCINE MOENS Professor KU Leuven

Learning Representations for Language Understanding: Experiences from the MUSE project In the MUSE (Machine Understanding for interactive StorytElling) project (FP7-FET), natural language text is automatically translated into states and actions that take place in a virtual world. We demonstrate how deep learning improves semantic role labeling (recognizing who did what, where, when and how in natural language sentences), and how it promotes the acquisition of world knowledge from textual and visual data for a true-to-nature rendering of the virtual world. The deep learning models include probabilistic graphical models, recurrent neural networks, and configurations of convolutional neural networks, word embeddings and denoising autoencoders. We show promising results and give ideas for future research. Marie-Francine Moens is a professor at the Department of Computer Science of KU Leuven, where she heads the Language Intelligence and Information Retrieval group (http://www.cs.kuleuven.be/groups/liir/). She is the author of more than 280 international peer-reviewed publications and of several books. She is involved in the organization or program committees (as program chair, area chair or reviewer) of major conferences on computational linguistics, information retrieval and machine learning. In 2011 and 2012 she was appointed chair of the European Chapter of the Association for Computational Linguistics (EACL). She is the scientific manager of the EU COST action iV&L (The European Network on Integrating Vision and Language). She was appointed a Scottish Informatics and Computer Science Alliance (SICSA) Distinguished Visiting Fellow in 2014.
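
To make "semantic role labeling" concrete, here is the kind of structured frame such a system extracts from a sentence before it is mapped onto states and actions in a virtual world. The sentence and role inventory are illustrative, not taken from MUSE:

```python
# Hypothetical output of a semantic role labeler for one story sentence,
# decomposed into the "who / what / where / when / how" components that
# a pipeline like MUSE's would map onto states and actions.
sentence = "Yesterday the knight quietly entered the castle"
frame = {
    "predicate": "entered",    # the action
    "who":  "the knight",      # agent
    "what": "the castle",      # patient / destination
    "when": "Yesterday",       # temporal modifier
    "how":  "quietly",         # manner modifier
}
print(f"{frame['who']} -> {frame['predicate']}({frame['what']})")
```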

SANDER DIELEMAN Research Scientist Google DeepMind Sander Dieleman is a research scientist at Google DeepMind and a PhD student in the Reservoir Lab at Ghent University in Belgium. The main focus of his PhD research is applying deep learning and feature learning techniques to music information retrieval (MIR) problems, such as audio-based music classification, automatic tagging and music recommendation.

JOHN OVERINGTON Director of Bioinformatics Stratified Medical

Artificial Intelligence in Drug Discovery In the course of a few years the pharmaceutical industry has transitioned from a golden era to one of low productivity and low innovation, despite huge investments in technology and data (genomics, high-throughput screening, etc.). Transformational advances in artificial intelligence algorithms are now impacting many areas of science and technology, but with little current impact on drug discovery and development. Recent proof-of-concept studies have shown the benefit of deep learning approaches in predicting the activity of potential drugs, for example. Our approach is to build an artificial intelligence platform leveraging very large-scale text and quantitative data covering published scientific literature, patents, and curated ‘framework’ background knowledge, and then to apply deep learning to discover 1) novel drug targets, 2) specific starting points for drug optimisation, and 3) new uses for currently approved drugs. John studied Chemistry at Bath, graduating in 1987. He then studied for a PhD at Birkbeck College on protein modelling, followed by a postdoc at ICRF (now CRUK). John then joined Pfizer, eventually leading a multidisciplinary group combining rational drug design, informatics and structural biology. In 2000 he moved to a start-up biotech company, Inpharmatica, where he developed the drug discovery database StARLite. In 2008 John moved to the EMBL-EBI, where the successor resource is known as ChEMBL. Most recently John joined Stratified Medical, where he continues his research as Director of Bioinformatics. In this role, John is involved in integrating deep learning and other AI approaches into drug target validation and drug optimisation.

JEFFREY DE FAUW Data Scientist Ghent University Jeffrey De Fauw studied pure mathematics at Ghent University before becoming more interested in machine learning problems through Kaggle competitions. Soon after he was introduced to (convolutional) neural networks and has since spent most of his time working with them. Besides always looking for challenging problems to work on, he has also become very interested in trying to find more algebraic structure in methods of representation learning.

ANDREW SIMPSON Research Fellow University of Surrey

Andrew Simpson holds a PhD in Human Auditory Perception and is a former games industry software engineer. He is a Research Fellow in the Centre for Vision, Speech and Signal Processing at the University of Surrey and also holds the position of Honorary Research Associate at the Ear Institute, University College London. His main interests are artificial neural networks and signal processing for speech and music. Dr Simpson has published 10 papers on Deep Learning since January this year.

JURIS PUCE CTO KleinTech The Challenges of Human Labour Automation with Deep Learning in the Transport Industry The key idea behind deep learning is to automate human labour, reducing the cost and time of a task while increasing the precision with which it can be done. In the transport industry, tasks like cargo number recognition and counting of objects were the first to be automated, and have now been improved to very high precision. However, there are many other tasks in the industry that could be automated, but computers currently lack the precision to guarantee compliance with industry security standards. This presentation will discuss how we have overcome some of these challenges and give an insight into upcoming applications and their effects on the industry. Juris Pūce is an adventurous entrepreneur, always looking for new challenges and businesses to build. He is interested in all things technologically innovative and somewhat unknown, hence most of his companies are IT-related. With over 15 years of experience in technology-related business management, Juris Pūce currently divides his work between being a visionary for various start-ups and being the CTO of KleinTech, a company that specialises in complex machine vision and deep learning technology solutions for the transport and security industries.

TONY ROBINSON Founder & CTO Speechmatics

Dr Tony Robinson obtained his PhD from Cambridge University Engineering Department in 1989. For the next decade he led the connectionist speech recognition research group at the university. He started his first company in 1995 and has founded or been involved with a large number of start-ups in the last two decades, including SpinVox, Softsound and Autonomy, mostly in the area of speech recognition and machine learning. He is pleased that the techniques he pioneered in the 1990s are now in vogue. His passion is the application of machine learning algorithms to tasks that have traditionally been considered impossible for computers to solve.

WALLY TRENHOLM Founder & CEO Sightline Innovation The Commercialisation of Deep Learning The world has not seen a more disruptive and powerful technology since the inception of the internet itself. Deep learning is going to transform every single industry that it touches. In manufacturing industries, the application of deep learning is as powerful as the introduction of robotics, with the ability to automate higher-level human decision making. Tasks such as quality inspection, process monitoring and production analysis are areas that rely heavily on humans but continue to be plagued with problems. Similarly, in medical diagnostics, the infrastructure around early detection and lab testing is ripe for transformation. This presentation will discuss how Sightline is applying the power of deep learning directly to these industries and effecting change to solve real problems with its deep learning cloud engine, Sightline Cortex. Wally is a technology visionary and serial entrepreneur who sold his previous company to Research In Motion. He has over 25 years of programming and 18 years of management experience. With Sightline Innovation, Wally has connected complex science and business with the goal of creating a leading global technology company around deep learning. As the Founder and CEO of Sightline Innovation, he has built a company focused on practical deep learning solutions for medical diagnostics and manufacturing. In a few short years, Sightline Innovation has already been successful at selling and deploying its deep learning products in commercial settings, and has built a powerful technology force to support it.

JASON CASSIDY MD, Chief Science Officer Sightline Innovation

Jason is an MD who left medical practice to drive the scientific effort at Sightline and the application of machine learning to microbiology. He was the driving force behind the adaptation of Sightline’s manufacturing products for nano-sensing and biosecurity, and his background as a physician is also helping to shape future applications in medical diagnostics.

MARIUS COBZARENCO Co-Founder & CTO re:infer Building Conversational Interfaces with Deep Nets Building a general-purpose conversational agent is extremely difficult; Alan Turing famously proposed human-computer conversation as a means to measure and assess machine intelligence. Recent advances in applying deep learning to problems in natural language understanding show great promise in this domain. In this talk we'll look briefly at the history of the problem and at how deep neural networks can be used to answer questions, guess intent and create new interfaces to apps and devices. I believe artificial intelligence will improve most aspects of our lives in the next decade; AI is already "eating the world" today. In particular, I am interested in how emerging technologies such as deep learning can be used to build frictionless natural language interfaces. To this end I co-founded re:infer. I'm an old-fashioned hacker with a strong understanding of machine learning and related fields such as probability theory, statistical modelling, linear algebra and multivariate calculus. Academically, my interests lie at the interface of graphical models, deep learning and deterministic approximate inference.
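
Intent guessing, one of the building blocks mentioned above, can be reduced to comparing an utterance embedding against intent prototypes. The sketch below uses random stand-in word vectors and hand-picked intents purely for illustration; a production system would learn the embeddings and a classifier from labelled conversations:

```python
import numpy as np

rng = np.random.default_rng(7)

DIM = 50
vocab = "book a table cancel my order play some music please".split()
word2vec = {w: rng.standard_normal(DIM) for w in vocab}  # stand-in embeddings

def embed(utterance):
    # Average the word vectors of the in-vocabulary tokens.
    vecs = [word2vec[w] for w in utterance.lower().split() if w in word2vec]
    return np.mean(vecs, axis=0)

# Intent prototypes: here just embeddings of canonical phrases.
intents = {name: embed(phrase) for name, phrase in {
    "restaurant_booking": "book a table",
    "order_cancellation": "cancel my order",
    "music_playback": "play some music",
}.items()}

def guess_intent(utterance):
    # Pick the intent whose prototype is most cosine-similar.
    u = embed(utterance)
    return max(intents, key=lambda k: u @ intents[k]
               / (np.linalg.norm(u) * np.linalg.norm(intents[k])))

print(guess_intent("please cancel my order"))
```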

CEES SNOEK Director QUVA

Video Understanding: What to Expect Today and Tomorrow? In this talk I will give an overview of recent advances in video understanding. For humans, understanding and interpreting the video signal that enters the brain is an amazingly complex task: approximately half the brain is engaged in assigning a meaning to the incoming imagery, starting with the categorization of all visual concepts in the scene, like an airplane or a cat face. Thanks to yearly concept detection competitions, vast amounts of training data, and several artificial intelligence breakthroughs, categorization of video at the concept level has now matured from an academic challenge to a commercial enterprise. As a natural response, the academic community is shifting its attention to more precise video understanding in the forms of localized actions, like phoning and sumo wrestling, as well as translating videos into single-sentence summaries such as ‘a person changing a vehicle tire’ and ‘a man working on a metal crafts project’. We present recent results in these exciting new directions and showcase real-world retrieval with the state-of-the-art MediaMill video search engine, even for recognition scenarios where training examples are absent. Cees Snoek is a director of QUVA, the joint research lab of the University of Amsterdam and Qualcomm on deep learning and computer vision. He is also a principal engineer at Qualcomm and an associate professor at the University of Amsterdam. He was previously a visiting scientist at Carnegie Mellon University, a Fulbright scholar at UC Berkeley and head of R&D at Euvision Technologies (acquired by Qualcomm). His research interests focus on video and image recognition. Dr. Snoek is the recipient of several career awards, including the Netherlands Prize for ICT Research. Cees is general chair of ACM Multimedia 2016 in Amsterdam.

MORE SPEAKERS COMING SOON

SUGGEST A SPEAKER

REGISTRATION

Super Early Bird Pass (until 1 May): £395
Early Bird Pass (until 31 July): £495
Standard Pass: £695
Startup/Academic Pass: £200

TEAM DISCOUNT Send your team and we’ll give you a discount! 20% off for teams of 3+ and 30% off for teams of 5+. Email [email protected] to make the booking.

Still have some questions? Contact us to say hello and we’ll make it our mission to answer them.

www.re-work.co

[email protected]

+44 203 287 0590

PARTNERS 


SPONSORS

NVIDIA awakened the world to computer graphics when it invented the GPU in 1999. Industry and academia are using GPUs for machine learning to make groundbreaking improvements across a variety of applications, including image classification, video analytics and speech recognition. GPUs perform many calculations at once, cutting processes that could otherwise take a year or more down to just weeks or days. www.nvidia.co.uk

Vocal IQ plc (Cambridge, England) is a startup focusing on the twin challenges of machine learning integrated with conversational voice technology. To that end, Vocal IQ has created the world’s first self-learning dialogue platform. The company’s technology enables natural human/ machine interaction for the first time on mobile, desktop, automotive and IoT devices. A spinoff from Cambridge University, Vocal IQ provides a robust, scalable and Cloud-based architecture that is extremely simple for developers to integrate into their solutions.

Qualcomm Incorporated is the world leader in 3G, 4G and next-generation wireless technologies. Qualcomm innovations are enabling ultra-personal mobile devices; shaping next-generation mobile experiences; and inspiring transformative new business models and services. Qualcomm is transforming the way people live, learn, work and play. Qualcomm is included in the S&P 500 Index and is a FORTUNE 500® company traded on the NASDAQ Stock Market® under the symbol QCOM. Qualcomm R&D, a division of Qualcomm Technologies, Inc. is where many of the industry’s most talented engineers and scientists come to create the wireless technologies that will transform the future of wireless.

Tyba is a tech-driven professional network that connects and matches companies (primarily startups) with the most suitable talent available. Our collective passion for technology has given birth to two exciting products, the Company Pages and the Source{d} project. Tyba Company Pages have been designed by our developers to provide clients with unique and interactive company pages that allow them to showcase their features through team interviews and photos, as well as insights on their company perks, values and interview tips. Source{d}, our tech-specific product, was recently built by our own developers, for developers. Our set of in-house algorithms is programmed to analyze code contributions from open-source projects, identifying and matching tech candidates for specific positions.

PARTNERS 


GLOBAL PARTNER

The world has moved beyond text. Since the invention of the Gutenberg press human knowledge has been recorded and communicated in text. Today, audio and video recording technologies enable the easy capture and storage of information in quantities that are mind-boggling. So we now have a problem, a chasm that needs to be crossed. On one side we have an abundant, growing mountain of media files; on the other, developers who need to get to those files and content. To cross that chasm, video and audio must become as easy to manipulate and search as web pages or Word documents. Clarify is building that bridge. Our platform makes media content extraction and search easy for developers to integrate into their applications. Our self-service API allows them to finally make this data actionable.

STARTUP SHOWCASE

Speech is the main form of human interaction. The problem is that information contained in audio is currently difficult to search and analyse. This presents a critical challenge for companies and applications such as call centres, conferences, education providers, mobile device manufacturers and many others. Transcription currently suffers from high costs (human), low accuracy (machine) and slow turn-around times (humans & most machines). Speechmatics is a leader in automatic speech recognition and provides services and applications across multiple areas including voice analytics, transcription services, language assessment and embedded OEM solutions. In short, Speechmatics makes speech analysable and discoverable.

Sightline Innovation is focused on applying machine learning to practical applications to solve real world problems through our unique technology platform, Sightline Cortex. Wielding the power of deep learning to condense the chaos of vast data collections into focused results, Sightline Cortex transforms data into knowledge, comprehension, and solutions. It is the only system of its kind serving the immediate needs of industry, with targeted strategies for manufacturing, automation, healthcare, and diagnostics. Our ability to perceive and take advantage of hitherto invisible patterns and knowledge will reshape how industries develop technologies and processes, solving their problems before they happen.

PARTNERS 


SPONSOR

Stratified Medical unites traditional pharmaceutical development methodology with powerful predictive analytical capabilities to reduce the friction inhibiting innovation and create a more efficient development process.

MORE PARTNERS COMING SOON

CONTACT US FOR MORE INFORMATION

UPCOMING EVENTS

Sept 2015 | London: Deep Learning
Sept 2015 | London: Future Technology
Nov 2015 | San Francisco: Connected Devices
Jan 2016 | San Francisco: Deep Learning
Jan 2016 | San Francisco: Virtual Personal Assistants