IEEE ISM 2016
San Jose, California, USA
December 11-13, 2016
A person’s lifestyle is the most controllable factor affecting her health. Advances in smartphones and wearable technology now make it possible to analyze and understand an individual’s lifestyle from passively collected objective data streams, to build a model of her and to predict important events in her life. Wearable/mobile sensors, smart homes, social networks, e-mail, calendar systems, and environmental sensors continuously generate data streams that can be used as lifestyle data. By assimilating and aggregating these multi-sensory data streams, we may create an accurate chronicle of a person’s life. By correlating life events with other events, and using a novel causality exploration framework, one can build a model of the person. Such a model, which we call the objective self, is an objective characterization of a person’s health, social life, and other aspects. We illustrate how to build an objective personal chronicle for a person. By building this personicle over a long period and applying pattern recognition, it is possible to build a model of the person that could yield actionable insights and alerts in everyday life.
He is a Donald Bren Professor of Information & Computer Sciences at the University of California, Irvine, where he does research in Event Web and experiential computing. Earlier he served on the faculty of Georgia Tech, the University of California at San Diego, the University of Michigan, Ann Arbor, Wayne State University, and the Indian Institute of Technology, Kharagpur. He is a Fellow of ACM, IEEE, AAAI, IAPR, and SPIE. His current research interests are in processing massive numbers of geo-spatial heterogeneous data streams for building Smart Social Systems. He is the recipient of several awards, including the 2010 ACM SIGMM Technical Achievement Award.
Ramesh co-founded several companies, managed them in their initial stages, and then turned them over to professional management. These companies include PRAJA, Virage, and ImageWare. Currently he is working with Krumbs, a situation-aware computing company. He has also been an advisor to several other companies, including some of the largest companies in the media and search space.
I will give an overview of some of the work done by the Video Content Analysis team at Google Research in the context of YouTube and Google Photos. I will show examples of features and use cases enabled or assisted by video understanding, and discuss in more detail how we have approached the problem of extracting meaning out of audio-visual signals at massive scales. I'll conclude with a note on data sets, and suggestions of open problem areas with large potential for impact.
Apostol (Paul) Natsev is a software engineer and manager in the video content analysis group at Google Research. Previously, he was a research staff member and manager of the multimedia research group at IBM Research from 2001 to 2011. He received a master's degree and a Ph.D. in computer science from Duke University, Durham, NC, in 1997 and 2001, respectively. Dr. Natsev's research interests span the areas of image and video analysis and retrieval, machine perception, large-scale machine learning and recommendation systems. He is an author of more than 80 publications and his research has been recognized with several awards.
Traditionally, there has been a division of labor between computers and humans where all forms of number crunching and bit manipulations are left to computers; whereas, intelligent decision-making is left to us humans. We are now at the cusp of a major transformation that can disrupt this balance. There are two triggers for this: firstly, trillions of connected devices (the "Internet of Things") that have begun to sense and transform the large untapped analog world around us to a digital world, and secondly, (thanks to Moore's Law) beyond-exaflop levels of compute, making a large class of structure learning and decision-making problems now computationally tractable. In this talk, I plan to discuss real challenges and amazing opportunities ahead of us for enabling a new class of applications and services, "Machine Intelligence Led Services". These services are distinguished by machines being in the 'lead' for tasks that were traditionally human-led, simply because computer-led implementations are about to reach and even surpass the quality metrics of current human-led offerings.
Pradeep Dubey is an Intel Fellow and Director of the Parallel Computing Lab (PCL), part of Intel Labs. His research focus is computer architectures to efficiently handle new compute-intensive application paradigms for the future computing environment. Dubey previously worked at IBM's T.J. Watson Research Center and Broadcom Corporation. He has made contributions to the design, architecture, and application performance of various microprocessors, including the IBM PowerPC, Intel i386, i486, Pentium, Xeon, and Xeon Phi lines of processors. He holds over 36 patents, has published over 100 technical papers, won the 2012 Intel Achievement Award for "Breakthrough Parallel Computing Research", and was honored with the Outstanding Electrical and Computer Engineer Award from Purdue University in 2014. Dr. Dubey received a PhD in electrical engineering from Purdue University. He is a Fellow of IEEE.
In my presentation I will introduce a kinematic and dynamic framework for creating a representative model of an individual. Building on results from geometric robotics, a method for formulating a geometric dynamic identification model is derived. This method is validated on a robotic arm and tested on healthy subjects and subjects with muscular dystrophy to determine its utility as a clinical tool. To capture the kinematics of the human body, we used visual observations, from either motion capture or the Kinect camera. To obtain the dynamical parameters of the individual, we used a force plate and force sensors on a robot attached to the human hand. Work in progress is to use an ultrasound scanner and acoustic myography to estimate muscle strength.
Our representative kinematic and dynamic model outperformed conventional height/mass-scaled models. This allows for rapid, quantitative measurements of an individual, with minimal retraining required for clinicians.
These tools are then used to develop a prescriptive model for developing assistive devices.
This framework is then used to develop a novel system for human assistance. A prototype device has been developed and tested. The prototype is lightweight, uses minimal energy, and can provide 82% augmentation for hammer curl assistance.
Ruzena Bajcsy (LF’08) received the Master’s and Ph.D. degrees in electrical engineering from Slovak Technical University, Bratislava, Slovak Republic, in 1957 and 1967, respectively, and the Ph.D. in computer science from Stanford University, Stanford, CA, in 1972. She is a Professor of Electrical Engineering and Computer Sciences at the University of California, Berkeley, and Director Emeritus of the Center for Information Technology Research in the Interest of Society (CITRIS). Prior to joining Berkeley, she headed the Computer and Information Science and Engineering Directorate at the National Science Foundation. Dr. Bajcsy is a member of the National Academy of Engineering and the Institute of Medicine, as well as a Fellow of the Association for Computing Machinery (ACM) and the American Association for Artificial Intelligence. In 2001, she received the ACM/AAAI Allen Newell Award, and she was named one of the 50 most important women in science in the November 2002 issue of Discover Magazine. She is the recipient of the Benjamin Franklin Medal in Computer and Cognitive Science (2009) and the IEEE Robotics and Automation Award (2013) for her contributions to the field of robotics and automation.
Keynote Talk Title: Small, Medium, and Big Data: Application of Machine Learning Methods to the Solution of Real-World Imaging and Printing Problems
Time: 8:30-9:30, Tuesday, Dec. 13, 2016
To provide a context for the discussion to follow, I will first briefly review our work with vendors in the printing and imaging area. Then, I will describe a series of problems that illustrate the successful application of machine learning methods to the solution of problems in the printing and imaging space. These problems range from the development of detailed microscale models for printer behavior; to algorithms for print and image quality assessment; to algorithms for predicting aesthetic quality of fashion photographs; to algorithms for detection and recognition of people in home and office settings. The algorithms take a variety of different forms, ranging from linear regression, context-dependent linear regression, and context-dependent linear regression augmented by stochastic sample function generation; to maximum likelihood estimation; to support vector machines; to convolutional neural networks. The sizes of the data sets used to train these algorithms range from tens of images to tens of thousands of images.
Jan P. Allebach is Hewlett-Packard Distinguished Professor of Electrical and Computer Engineering at Purdue University, West Lafayette, Indiana. Imaging has been a central theme of his research. Algorithms developed in his laboratory have been licensed to major vendors of imaging products, and can be found in the drivers, firmware, and hardware (ASICs) of hundreds of millions of units that have been sold worldwide. Allebach is a Fellow of IEEE, IS&T (The Society for Imaging Science and Technology), and SPIE. He was elected to membership in the National Academy of Engineering and the National Academy of Inventors. He received Honorary Membership from IS&T, its highest award. He received the IEEE Daniel E. Noble Award for Emerging Technologies, and most recently, the 2016 Edwin H. Land Medal from the Optical Society of America and IS&T.
The 12th IEEE International Workshop on Multimedia Information Processing and Retrieval (IEEE-MIPR 2016)
Fifth IEEE International Ph.D. Workshop on Multimedia Computing Research (MCR 2016)
The 11th IEEE International Workshop on Multimedia Technologies for E-Learning (MTEL 2016)
First IEEE Workshop on Multimedia Support for Decision-Making Processes (MuSDeMP 2016)
IEEE ISM Workshop on Cyber-physical Multimedia Systems for Smarter Healthcare (CPMMS-Health2016)