Bioinformatics and Computational Genomics
Abolfazl Hashemi (University of Texas at Austin)
A Tensor Factorization Framework for Haplotype Assembly of Diploids and Polyploids
10:30-10:50, February 16th, Thursday
Complete information about variations in an individual’s genome is given by haplotypes, the ordered lists of single nucleotide polymorphisms on the individual’s chromosomes. Haplotype assembly from short reads obtained by high-throughput DNA sequencing requires partitioning the reads into k clusters, each collecting the reads corresponding to one of the chromosomes. Erroneous sequencing leads to ambiguities regarding the origin of a read and therefore renders haplotype assembly challenging. Most recent haplotype assembly methods focus on the Minimum Error Correction (MEC) formulation. In this work, a framework that models the MEC formulation as a tensor factorization problem is established. An iterative algorithm, AltHap, which reconstructs haplotypes of either diploid or polyploid organisms by solving this factorization problem, is proposed. The performance and convergence properties of AltHap are analyzed and, in doing so, the first theoretical guarantees on the achievable MEC scores are established. In particular, it is shown that under some conditions on sequencing coverage and error rate, if the algorithm starts the iterations from an appropriately selected initial point, AltHap converges to a stationary point that, with high probability, is in close proximity to the true sequences.
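The alternating flavor of such factorization-based MEC solvers can be illustrated with a toy sketch. This is an illustrative reimplementation of the generic alternating-minimization idea only, not the AltHap algorithm itself; the function name, the {0, 1, -1} matrix encoding, and the random initialization are assumptions for the example.

```python
import numpy as np

def alternating_assembly(R, k, H0=None, iters=20, seed=0):
    """Toy alternating minimization for the MEC objective.

    R  : (reads x SNP sites) matrix with allele codes {0, 1} and -1
         marking sites a read does not cover.
    k  : ploidy, i.e. number of haplotypes to reconstruct.
    H0 : optional (k x sites) initial haplotype guess.
    Returns (read assignments, haplotypes, MEC score).
    """
    rng = np.random.default_rng(seed)
    n, m = R.shape
    H = rng.integers(0, 2, size=(k, m)) if H0 is None else np.array(H0)
    mask = R >= 0                                   # observed entries only
    for _ in range(iters):
        # Assignment step: each read joins the haplotype it mismatches least.
        cost = np.array([((R != h) & mask).sum(axis=1) for h in H])
        z = cost.argmin(axis=0)
        # Update step: per-cluster majority vote at every covered site.
        for c in range(k):
            Rc, Mc = R[z == c], mask[z == c]
            for j in range(m):
                obs = Rc[Mc[:, j], j]
                if obs.size:
                    H[c, j] = int(obs.mean() > 0.5)
    mec = int(((R != H[z]) & mask).sum())           # errors left to correct
    return z, H, mec
```

Each iteration can only decrease the MEC score, so the procedure converges to a stationary point; the talk's contribution is characterizing when such a point is close to the true haplotypes.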
This is joint work with Banghua Zhu and Haris Vikalo.
Abolfazl Hashemi is a third-year graduate student in the Department of Electrical and Computer Engineering at the University of Texas at Austin, under the supervision of Prof. Haris Vikalo. His research interests include machine learning, bioinformatics, and signal processing; in particular, he designs efficient heuristics with provable performance guarantees for NP-hard optimization problems. Abolfazl received his BS degree from Sharif University of Technology, Iran, in 2014, and his MSE degree from UT Austin in 2016. He was also a visiting undergraduate researcher in the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology under the supervision of Prof. Daniel Palomar. Abolfazl was a recipient of the Iranian National Elite Foundation fellowship.
Decision and Control, Systems, and Networks
Nak-seung Patrick Hyun (Georgia Institute of Technology)
Infinitesimal Modeling of Impulsive System: A Nonstandard Analysis Approach
14:30-14:50, February 16th, Thursday
This talk introduces a new framework for modeling nonlinear impulsive systems, emphasizing cause versus effect, by using generalized functions defined on the hyperreal space in Nonstandard Analysis (NSA). The discrete jump equation in classical nonlinear impulsive systems can be eliminated by formulating an equivalent generalized ordinary differential equation (GODE) whose generalized solution displays the same jump behavior. The first task is to construct an algebraically structured extended real space within the hyperreals in order to simplify the space of infinitesimals in NSA. The proposed space is called a Krylov space, since its construction is similar to the Krylov subspace method in numerical linear algebra. Next, a generalized piecewise continuous function is defined on the Krylov space using two basic operators, scaling and translation. By introducing an extended differentiation on the space of piecewise differentiable functions that satisfies the Leibniz product rule, we derive a singular delta function on the Krylov space. The proposed framework shows that every generalized function can be differentiated and that multiplication between generalized functions is pointwise well defined. Finally, the GODE is formulated using the extended differentiation, and the piecewise continuous solution is found in the new generalized function space. A motivational example of a bouncing ball moving on a horizontal surface is analyzed to show the effectiveness of the approach.
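As a schematic illustration of the modeling step, written here in standard impulsive-system notation (the talk's actual GODE is defined via the extended differentiation on the Krylov space, which is not reproduced here):

```latex
% Classical impulsive model of a bouncing ball: height $x$, velocity $v$,
% gravity $g$, restitution coefficient $e \in (0,1)$, impact times $t_k$.
\[
\dot x = v, \qquad \dot v = -g \quad (x > 0),
\qquad v(t_k^+) = -e\, v(t_k^-) \ \text{when } x(t_k) = 0 .
\]
% A GODE absorbs the separate jump rule into one differential equation
% through a singular delta term carrying the jump $\Delta v = -(1+e)v(t_k^-)$:
\[
v' = -g \;-\; (1+e) \sum_k v(t_k^-)\, \delta(t - t_k),
\]
% where $\delta$ is the singular delta function on the Krylov space.
```

The point of the framework is that this delta term and its products are well defined on the hyperreal construction, rather than only formally.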
Nak-seung Patrick Hyun received the B.S. degree in electrical engineering in 2009 from Korea University, and the M.S. degrees in mathematics and electrical engineering in 2013 from the Georgia Institute of Technology, where he is currently a Ph.D. candidate. His recent research addresses a new framework for causal modeling of impulsive systems using nonstandard analysis, and optimal path planning for multi-agent systems. He received the Best Contribution Award in the Decision and Control Laboratory Graduate Student Symposium at Georgia Tech in 2015. He has served as a TA for electrical engineering and mathematics courses and won the Outstanding Graduate Teaching Assistant of the Year award in 2011.
Information Processing Circuits and Systems
Edward Lee (Stanford University)
Space-time Computing for Machine Learning
11:00-11:20, February 17th, Friday
We introduce space-time computing, a methodology aimed at enabling scalable computing for machine learning. This work is organized into two components: A) the design and implementation of two CMOS-compatible compute atoms that perform multiply-and-add operations with higher energy efficiency than conventional digital implementations, and B) the design of space-time atoms that balance compute and communication costs. We propose a new neural architecture that limits this communication overhead while maintaining high algorithmic performance and achieving energy efficiency that is roughly independent of the number of operations.
Edward Lee is a PhD student at Stanford working on efficient and secure computing for deep learning (inference and training). He also collaborates with the School of Medicine on deep learning for lung cancer diagnosis. He obtained his MS in Electrical Engineering from Stanford in 2014 and his BSEE from Arizona State University in 2012. He is a Goldwater Scholarship recipient, and his work is supported by the Texas Instruments SGF and the NSF.
Machine Learning and Signal Processing
Wei Yu (Carnegie Mellon University)
AdaDelay: Delay Adaptive Distributed Stochastic Optimization
15:30-15:50, February 17th, Friday
We develop distributed stochastic convex optimization algorithms under a delayed gradient model in which server nodes update parameters and worker nodes compute stochastic (sub)gradients. Our setup is motivated by the behavior of real-world distributed computation systems; in particular, we analyze a setting wherein worker nodes can be differently slow at different times. In contrast to existing approaches, we do not impose a worst-case bound on the delays experienced but rather allow the updates to be sensitive to the actual delays. This sensitivity allows the use of larger step sizes, which can help speed up initial convergence without having to wait too long for slower machines, while the provable global convergence rate is still preserved. We conducted experiments on a parameter server with different delay patterns and obtained noticeable improvements on large-scale real datasets with billions of examples and features. The presentation is based on our AISTATS 2016 paper, joint work with Suvrit Sra of MIT and Mu Li and Alex Smola of CMU.
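The core idea, step sizes that shrink with the observed delay rather than a worst-case bound, can be sketched as follows. This is a simplified illustration under assumed notation (a step-size denominator of the form L + c*sqrt(t + tau_t)), not the paper's exact algorithm, and the gradient is evaluated at the current iterate here, whereas in a real system it would be stale.

```python
import numpy as np

def delay_adaptive_sgd(grad_fn, x0, delays, c=1.0, L=1.0):
    """Server-side loop with a delay-adaptive step size (toy sketch).

    grad_fn(x, t) : stochastic gradient supplied by a worker.
    delays        : sequence of actual delays tau_t observed at each step t.
    The effective step size 1 / (L + c*sqrt(t + tau_t)) shrinks with the
    delay that actually occurred, not with a worst-case delay bound.
    """
    x = np.asarray(x0, dtype=float)
    for t, tau in enumerate(delays, start=1):
        g = grad_fn(x, t)                          # worker's (sub)gradient
        x = x - g / (L + c * np.sqrt(t + tau))     # delay-sensitive update
    return x
```

When all delays are small, the denominator stays small and the effective step size is large, which is exactly the "larger stepsizes, faster initial convergence" behavior described above; a slow worker only penalizes its own update.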
(Adams) Wei Yu is a PhD student in the Machine Learning Department at Carnegie Mellon University, advised by Jaime Carbonell and Alex Smola. His research interests include large-scale optimization, statistical machine learning, and deep learning, with applications in natural language processing and demand forecasting. He regularly publishes in ICML, NIPS, COLT, AISTATS, and VLDB. One of his papers was a finalist for the INFORMS 2014 Data Mining Best Student Paper award, and a coauthored paper was nominated for Best Paper at ICME 2011. He was a Siebel Scholar, Class of 2015.