# Illinois Students

## Bioinformatics and Computational Genomics

### Abstract

Understanding the relationship between protein structure and function is a fundamental problem in protein science. Given a protein with unknown function, fast identification of similar protein structures from the Protein Data Bank (PDB) is a critical step for inferring its function. Such structural neighbors may provide evolutionary insights into protein folds and functions that are not detectable from sequence similarity. Performing pairwise structural alignments against all structures in the PDB is prohibitively expensive. Alignment-free approaches were introduced to enable fast but coarse comparisons by representing each protein as a vector of structure features or fingerprints and computing similarity between vectors, with performance comparable to some structural alignment algorithms. As a notable example, Fragbag represents each protein by a “bag of fragments”, a vector of frequencies of contiguous short backbone fragments from a predefined library. However, the performance of Fragbag is not satisfactory, because its backbone fragment library may not be optimally selected and, more importantly, long-range interaction patterns are ignored. Here we present a new approach to learning effective structural motif representations using deep learning. We develop DeepFold, a deep convolutional neural network model that extracts structural motif features of a protein structure from its $C_\alpha$ pairwise distance matrix. As in Fragbag, we represent each protein structure/fold by a vector of the learned motifs and perform the structure search by computing only vector similarity. The network is trained in a supervised manner by fitting TM-score, a structural similarity score, to discriminate similar and dissimilar template structures of a query.
We demonstrate that DeepFold greatly outperforms Fragbag on protein structure search on a non-redundant protein structure database and a set of newly released PDB structures, in terms of both search accuracy and efficiency for computing structural representations. Remarkably, DeepFold not only extracts meaningful backbone segments but also identifies important long-range interacting structural motifs for structural comparison. We expect that DeepFold will provide new insights on the evolution and hierarchical organization of protein structural motifs.
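The search step shared by Fragbag and DeepFold reduces to ranking database structures by the similarity of their fingerprint vectors. A minimal sketch, with toy motif-frequency vectors and cosine similarity as purely illustrative (hypothetical) choices:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two fingerprint vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_templates(query_vec, template_vecs):
    """Rank template structures by fingerprint similarity to the query."""
    scores = [(name, cosine_similarity(query_vec, v))
              for name, v in template_vecs.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Toy 4-dimensional motif-frequency vectors (hypothetical)
query = np.array([3.0, 0.0, 2.0, 1.0])
templates = {
    "fold_A": np.array([3.0, 0.0, 2.0, 1.0]),   # identical fingerprint
    "fold_B": np.array([0.0, 5.0, 0.0, 1.0]),   # dissimilar fingerprint
}
ranking = rank_templates(query, templates)
```

In practice the vectors would come from the fragment library (Fragbag) or the trained network (DeepFold), with precomputed fingerprints stored for every database entry.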

### Bio

Yang Liu is a second-year Ph.D. student in the Department of Computer Science, University of Illinois at Urbana-Champaign. He received his bachelor's degree in computer science from Tsinghua University. His current research interests lie in deep learning for scientific problems and deep reinforcement learning. In his previous work, he applied deep neural networks to model protein structures for both effective and efficient search. He has also developed algorithms that achieve more effective performance on modern RL tasks such as Atari games.

### Abstract

Complex diseases have been associated with altered gene expression due to changes in regulatory DNA sequences. However, the precise, quantitative relationship between gene expression and the corresponding regulatory sequences is not fully understood. Recent technological breakthroughs have made high-throughput genomics data widely available, and the next frontier is to use these data to understand gene regulation. Here, we are particularly interested in two types of biological data. First, chromatin accessibility data distinguish the condensed and accessible regions of DNA sequences, and therefore provide a genome-wide picture of active DNA regions. We ask whether accessibility information improves our ability to quantitatively predict expression. Second, DNA shape describes the local molecular structure, and can compensate for information that sequence alone lacks. Hence, we ask whether DNA shape can noticeably improve the prediction of gene expression. We systematically and empirically answer these two questions, which are the main subjects of this study. We developed a sequence-to-expression model to better elucidate gene regulation in a biologically intuitive and accurate manner. Specifically, we used a machine learning and statistical framework to predict fruit fly (Drosophila) gene expression from regulatory sequences. Results show that accessibility and DNA shape data explain the expression data with higher accuracy. Our work demonstrates how integration of heterogeneous data can be useful in sequence-to-expression modeling. With the growing availability of data sets, we expect that it will be possible to train such models more accurately and to better understand the molecular mechanisms of gene expression for which sequence alone cannot account.
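As a rough illustration of the kind of data integration described above, one can weight each motif site's score by its local chromatin accessibility and regress expression on the resulting feature. The linear form, the feature construction, and all numbers below are hypothetical, not the model used in this work:

```python
import numpy as np

def accessibility_weighted(site_scores, accessibility):
    """Down-weight motif sites that fall in inaccessible chromatin."""
    return np.sum(site_scores * accessibility, axis=1)

def fit_expression_model(features, expression):
    """Least-squares fit of expression against a regulatory feature."""
    X = np.column_stack([features, np.ones(len(expression))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, expression, rcond=None)
    return coef

# Toy data: 4 enhancers with 3 motif sites each (hypothetical numbers)
sites = np.array([[1.0, 0.5, 0.0],
                  [0.2, 0.9, 0.4],
                  [0.0, 0.1, 0.8],
                  [0.7, 0.3, 0.2]])
access = np.array([[1.0, 1.0, 0.0],
                   [1.0, 0.0, 1.0],
                   [0.0, 1.0, 1.0],
                   [1.0, 1.0, 1.0]])
x = accessibility_weighted(sites, access)
y = 2.0 * x + 0.5                      # synthetic "expression" values
coef = fit_expression_model(x, y)      # recovers slope ~2.0, intercept ~0.5
```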

### Bio

Pei-Chen Peng is a Ph.D. candidate in Computer Science at UIUC, advised by Prof. Saurabh Sinha. She is interested in computational approaches to problems in molecular biology. Her work focuses on designing computational models, using biophysical principles and machine learning methods, to understand gene regulation. She was awarded the Google Anita Borg Memorial Scholarship in 2012.

### Abstract

Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification for sequences that are distantly related to those in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. Moreover, for metagenomic samples drawn from sources without a wealth of existing data, such as extreme environments, identification of evolutionarily distant family members from fragmentary sequences is exactly the key challenge. We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile hidden Markov models can better represent multiple sequence alignments than a single profile hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp.
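The core idea of representing a family by an ensemble of profile HMMs, rather than a single one, can be sketched as follows. The bit scores and family names are hypothetical, and HIPPI's actual scoring and thresholding are more involved; the sketch shows only that a query is assigned by its best hit across all ensemble members:

```python
def ensemble_assign(query_scores_per_family):
    """Assign a query to the family whose ensemble of profile HMMs
    gives the best (maximum) score over all ensemble members.

    query_scores_per_family: {family: [score from each HMM in the ensemble]}
    """
    best = {fam: max(scores) for fam, scores in query_scores_per_family.items()}
    return max(best, key=best.get)

# Hypothetical bit scores of one query against two families' ensembles
scores = {
    "PF00001": [12.4, 30.1, 8.9],   # one subset-HMM matches strongly
    "PF00002": [15.0, 14.2, 16.7],  # uniformly mediocre matches
}
family = ensemble_assign(scores)
```

A single full-alignment HMM would blur the strong subfamily signal that the ensemble's second member captures here, which is the intuition behind the ensemble representation.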

### Bio

Michael is a graduate student in the Department of Statistics at the University of Illinois at Urbana-Champaign, advised by Professor Tandy Warnow. As a member of Dr. Warnow’s lab he has developed new methods for large-scale multiple sequence alignment and applications of molecular phylogenetics to detection problems in biology, specifically metagenomics. Michael is a current CompGen fellow, co-advised by Dr. Warnow and Professor Rebecca Stumpf. Under the fellowship, he is participating in active research on the primate microbiome, using research questions and data from field studies to guide the development of targeted computational methods.

### Abstract

By now, it has been widely recognized that DNA-based information alone is not sufficient for many biomedical studies, and that diverse -omics sources of evidence are required instead. In this context, data pertaining to protein-DNA interactions and their influence on gene expression regulation are of special importance. Chromatin immunoprecipitation-sequencing (ChIP-seq) is an inexpensive DNA sequencing technique for quantifying DNA-protein interactions which combines ChIP with parallel DNA sequencing to identify the binding sites of DNA-associated proteins. As a result, ChIP-seq represents an important project area supported by the Encyclopedia of DNA Elements (ENCODE) project and the National Institutes of Health (NIH). The emergence of large numbers of ChIP-seq data files has created new challenges regarding data storage, transfer, and exchange. To tackle this problem, different data compression techniques have been proposed to reduce the size of ChIP-seq files. The most common compression algorithms, including bigWig and cWig, often use the general-purpose gzip algorithm as the primary compressor. In contrast, a recently developed compression algorithm for RNA-seq, termed smallWig, uses specialized statistical analysis coupled with arithmetic and context-tree weighted coding. As a result, smallWig offers an order-of-magnitude better compression rate (defined as the ratio of the compressed to the uncompressed file size) than bigWig, cWig, and gzip. Unfortunately, smallWig does not represent an efficient compression tool for ChIP-seq data, as the statistics of binding affinities in ChIP-seq tracks differ substantially from those of the expression data in RNA-seq files. In order to address this issue, new statistical modeling approaches are needed for protein binding counts, along with accompanying transform encoding techniques such as iterative delta encoding, run-length encoding, and arithmetic encoding.
We propose a new lossless compression method especially designed for ChIP-seq wig data, termed ChIPWig. ChIPWig offers significantly better compression rates than the standard bigWig and gzip methods. ChIPWig also offers random access functionalities which enable fast queries from the compressed file. To enable random access features, ChIPWig performs careful block-wise encoding and merging of all encoded blocks. Unlike the bigWig and cWig compressors, which operate with fixed block sizes, ChIPWig enables the use of variable-length block sizes that may be chosen by the user. In the random access mode, ChIPWig also stores the summary statistics of each block. The proposed compression model in ChIPWig can be generalized for compression of other genomic data formats, and it complements the recently introduced smallWig platform for RNA-seq data. We tested the ChIPWig compressor on a number of ChIP-seq datasets generated by the ENCODE project. The results reveal that ChIPWig offers a 5- to 6-fold decrease in file size compared to bigWig, and a 4-fold and 2-fold improvement compared to gzip and cWig, respectively. The gain in compression efficiency compared to Wig files is 14-fold. The running times for the compression and decompression of ChIP-seq files are comparable to those of bigWig. As an example, the compression and decompression speeds are 0.226 MB/sec and 0.21284 MB/sec, respectively. The ChIPWig platform with random access leads to a slight increase in compression rate when compared to the standard mode, typically from 0.007 to 0.04. ChIPWig in the random query mode leads to a modest increase from 0.07 to 0.3 in the compression time rate and an increase from 0.008 to 0.08 in the decompression time rate when compared to the standard mode.
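The transform-coding steps named above can be illustrated on a toy coverage track. This sketch shows only first-order delta coding followed by run-length coding, not ChIPWig's full statistical pipeline; the track values are hypothetical:

```python
def delta_encode(values):
    """First-order delta encoding: store differences between consecutive counts."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def run_length_encode(values):
    """Collapse runs of equal values into [value, run_length] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# Flat stretches of a coverage track become long runs of zeros after delta coding,
# which the run-length (and ultimately arithmetic) coder exploits.
track = [5, 5, 5, 5, 7, 7, 7, 2]
deltas = delta_encode(track)          # [5, 0, 0, 0, 2, 0, 0, -5]
runs = run_length_encode(deltas)      # [[5, 1], [0, 3], [2, 1], [0, 2], [-5, 1]]
```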

### Bio

Vida Ravanmehr received her bachelor’s and master’s degrees in Applied Mathematics from Isfahan University of Technology, Isfahan, Iran, in 2005 and 2008, respectively. She received her Ph.D. in Electrical and Computer Engineering from the University of Arizona in 2015. She is now a post-doctoral research associate at the Coordinated Science Lab (CSL) at the University of Illinois, Urbana-Champaign. Her research interests are coding theory, genomic data compression, LDPC codes, and message-passing algorithms.

## Decision and Control, Systems, and Networks

### Abstract

In congested vehicular traffic, small disturbances or fluctuations in the velocity of a single vehicle may induce dynamically evolving traffic instabilities such as stop-and-go waves. This work demonstrates in simulation that through intelligent control of a small number (e.g., 1-5%) of autonomous vehicles (AVs) soon to be present in the traffic flow, it is possible to dampen and in some cases completely remove these traffic waves in the entire traffic stream. We also present some recent experimental results.

### Bio

Raphael Stern is a PhD student in Civil Engineering at the University of Illinois studying sustainable and resilient infrastructure systems in the lab of Dr. Dan Work. Raphael’s primary research interests include autonomous vehicles and traffic control, particularly in systems with mixed human-piloted and autonomous vehicles. In 2015 Raphael was a fellow at the University of California at Los Angeles’ Institute for Pure and Applied Mathematics. Raphael has won numerous awards including the Eisenhower Transportation Fellowship and the Best Presentation Award at the International PhD Student Symposium in Harbin, China.

### Abstract

A particle filter is introduced to numerically approximate the solution of a global optimization problem. The theoretical significance of this work comes from its variational aspects: (i) the proposed particle filter is a controlled interacting particle system where the control input represents the solution of a mean-field type optimal control problem; and (ii) the associated density transport is shown to be a gradient flow (steepest descent) for the optimal value function, with respect to the Kullback–Leibler divergence. The optimal control construction of the particle filter is a significant departure from the classical importance sampling-resampling based approaches. There are several practical advantages: (i) resampling, reproduction, death or birth of particles is avoided; (ii) simulation variance can potentially be reduced by applying feedback control principles; and (iii) the parametric approximation naturally arises as a special case. The latter also suggests systematic approaches for numerical approximation of the optimal control law. The theoretical results are illustrated with numerical examples.

### Bio

Chi Zhang received the B.E. degree in Automotive Engineering from Tsinghua University, Beijing, China, in 2011 and the M.S. degree in Mathematics from the University of Illinois at Urbana-Champaign in 2014. He is currently pursuing the Ph.D. degree in the Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign. His research interests include particle-based nonlinear filtering and global optimization, and their generalization to Riemannian manifolds.

### Abstract

Distributed optimization over networks has received a surge of attention during the last decade due to its wide applications in many areas, including resource allocation in wireless sensor networks, big data analysis in machine learning, and coordinated optimization in power networks. In this problem, a group of n nodes communicating over a dynamic network wishes to cooperatively solve an optimization problem whose objective function is the sum of local functions known to the nodes. Due to the large scale of such systems and the complicated interactions between nodes, distributed methods, in which nodes may only send/exchange messages with their neighbors, are preferable to centralized methods.

In this talk, I first present distributed gradient-based consensus methods, the most well-studied methods in the literature for distributed optimization. I will talk about our recent result, in which we address an open problem of distributed optimization, namely, the performance of distributed gradient methods with communication delays. Second, I will show how to utilize distributed gradient methods to solve resource allocation problems, a fundamental and important problem arising in a variety of application domains. Specifically, I will present a novel approach for this problem, namely, distributed Lagrangian methods. Finally, I provide simulations to illustrate the correctness of our theoretical results and to show the effectiveness of our approach in solving the well-known economic dispatch problem.
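The basic distributed gradient update underlying these methods combines a consensus (mixing) step over the network with a local gradient step. A toy numpy sketch on quadratic local objectives over a complete graph, not the delayed or Lagrangian variants discussed in the talk:

```python
import numpy as np

def distributed_gradient_step(x, W, c, alpha):
    """One synchronous distributed gradient iteration: each node mixes its
    neighbors' iterates via the doubly stochastic matrix W, then steps along
    its own local gradient, here grad f_i(x_i) = x_i - c_i."""
    return W @ x - alpha * (x - c)

# Three nodes cooperatively minimize sum_i (x - c_i)^2 / 2; the optimum of
# the sum is the mean of c = [0, 3, 6], i.e. x* = 3.
c = np.array([0.0, 3.0, 6.0])
W = np.full((3, 3), 1.0 / 3.0)   # complete-graph uniform averaging weights
x = np.zeros(3)
for _ in range(500):
    x = distributed_gradient_step(x, W, c, alpha=0.1)
# With a constant step size the iterates settle near (not exactly at) x* = 3;
# a diminishing step size would drive all nodes to exact consensus at x*.
```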

### Bio

Thinh T. Doan is a fourth-year Ph.D. student in the Department of Electrical and Computer Engineering (ECE) at UIUC, working with Professor Carolyn Beck. He was born in Vietnam, where he completed his undergraduate studies in ECE at Hanoi University of Science and Technology. Before joining UIUC, he received a master's degree in ECE from the University of Oklahoma, US, in 2013. His research interests span the intersection of control theory and optimization. Recently, he has been focusing on distributed optimization with applications to machine learning, network resource allocation, and coordinated optimization in power networks.

### Abstract

Control and path planning for under-actuated systems have long been a challenging research topic. While it is difficult to compute an optimal trajectory, referred to as a geodesic, that minimizes a given cost functional, an alternative way of finding geodesics is proposed via solving a set of heat flow equations with a penalty in the infeasible control directions. Because the variables are separated in the system of heat flow equations, it is well suited to numerical solution. It can be shown that, by setting the penalty sufficiently large, the overall cost resulting from this method can be made arbitrarily close to the optimal cost and the actual system trajectory stays in the proximity of a geodesic. In addition, the heat flow method can be extended to obstacle avoidance, which is demonstrated in a unicycle example.

### Bio

Shenyu Liu is a second-year Ph.D. student in the Department of Electrical and Computer Engineering at UIUC, working with Professor Daniel Liberzon and Professor Ali Belabbas. He was born in China and completed bachelor’s degrees in Mechanical Engineering and Mathematics at the University of Singapore before coming to UIUC. He then received a master’s degree in Mechanical Science and Engineering from UIUC in 2015. Shenyu’s research interests are mainly in control theory. Recently he has been focusing on almost Lyapunov functions, input-to-state stability of switched systems, and geometric control of under-actuated systems.

### Abstract

Wholesale electricity market designs in practice do not provide market participants with adequate mechanisms to hedge their financial risks. Demanders and suppliers will likely face even greater risks with the deepening penetration of variable renewable resources like wind and solar. This presentation explores the design of a centralized cash-settled call option market to mitigate such risks. A cash-settled call option is a financial instrument that, for an upfront fee, gives its holder the right to claim a monetary reward equal to the positive difference between the real-time price of an underlying commodity and a pre-negotiated strike price. Through an example, we illustrate that a bilateral call option can reduce the payment volatility of market participants. Then, we design a centralized clearing mechanism for call options and illustrate through an example how it generalizes the bilateral trade. Finally, the effect of the risk preferences of the market participants, as well as some generalizations, are discussed.
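The payoff of the cash-settled call option described above is simply the positive part of the price-strike difference, net of the upfront fee. The prices below are hypothetical:

```python
def call_option_payoff(real_time_price, strike, fee):
    """Net payoff to the holder of a cash-settled call option:
    the positive part of (price - strike), minus the upfront fee."""
    return max(real_time_price - strike, 0.0) - fee

# Hypothetical numbers: strike $40/MWh, upfront fee $2/MWh
payoff_high = call_option_payoff(55.0, 40.0, 2.0)   # price spike: 15 - 2 = 13
payoff_low = call_option_payoff(30.0, 40.0, 2.0)    # expires worthless: -2
```

The cap on downside loss (the fee) against unbounded upside during price spikes is what makes the option a hedge against real-time payment volatility.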

### Bio

Khaled is a PhD candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign. He is primarily interested in applying control-theoretic and game-theoretic methods to energy systems.

## Information Processing Circuits and Systems

### Abstract

This work proposes a deep in-memory processor that achieves higher throughput and energy efficiency for machine learning applications. The architecture employs low-swing/low-SNR analog processing at the component level to achieve aggressive energy efficiency without affecting system accuracy, owing to the algorithms’ inherent error resiliency. The concept is validated by two silicon IC prototypes, (1) a multi-functional inference engine and (2) a random forest classifier, achieving a 56X smaller energy-delay product without accuracy loss.

### Bio

Mingu Kang received the B.S. and M.S. degrees in electrical and electronic engineering from Yonsei University, Seoul, Korea, in 2007 and 2009, respectively. From 2009 to 2012, he worked in the Memory Division of Samsung Electronics, Hwaseong, Korea, where he was engaged in the circuit and architecture design of phase change memory (PRAM). He is currently pursuing the Ph.D. degree in electrical and computer engineering at the University of Illinois at Urbana-Champaign. His research interests include low-power VLSI circuits, architectures, and systems for programmable hardware accelerators for machine learning and pattern recognition applications.

### Abstract

Machine Learning (ML) algorithms underpin a range of perceptual applications such as computer vision and machine listening. However, these ML algorithms tend to be computationally intensive and create challenges for use modes involving real-time execution or mobile-form-factor power constraints. As we near the end of the silicon roadmap, exploration of custom accelerator architectures is an increasingly attractive alternative to simple core-level parallelism. This project explores novel accelerator architectures for sampling-based probabilistic inference, focusing on power and resilience. As a case study, we explore sound source separation to isolate human voice from background noise on mobile phones; the challenges involved are real-time execution and power constraints. As a solution, we present novel FPGA and ASIC implementations of hardware-based sound source separation capable of real-time streaming performance.

### Bio

Glenn G. Ko is a PhD candidate in the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, where his advisor is Dr. Rob A. Rutenbar, Head of the Department of Computer Science. He received his B.S. and M.S. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2004 and 2006, respectively. Prior to starting his PhD, Glenn was a design engineer at Samsung Electronics for four years, where he engaged in research and development of Samsung Exynos, the ARM-based application processor SoCs that power the Samsung Galaxy S smartphone line. During his PhD, Glenn has interned at Qualcomm Research and the IBM T.J. Watson Research Center, working on accelerator architecture research. Glenn’s current research interests include low-power design/architecture, machine learning, probabilistic graphical models, and neural networks. He is currently working on stochastic accelerator cores and energy-efficient inference cores for graphical models and neural networks.

### Abstract

Convolutional neural networks (CNNs) have gained considerable interest due to their state-of-the-art performance in many recognition tasks. However, the computational complexity of CNNs hinders their application on power-constrained embedded platforms. In this paper, we propose a variation-tolerant architecture for CNNs capable of operating in the near-threshold voltage (NTV) regime for energy efficiency. A statistical error compensation (SEC) technique referred to as rank-decomposed SEC (RD-SEC) is proposed and applied to the CNN architecture in NTV in order to correct timing errors that can occur due to process variations. Simulation results in 45 nm CMOS show that the proposed architecture can achieve a median detection accuracy Pdet >= 0.9 in the presence of gate-level delay variation of up to 34%. This represents an 11x improvement in variation tolerance in comparison to a conventional CNN. We further show that the RD-SEC-based CNN enables up to a 113x reduction in the standard deviation of Pdet compared with the conventional CNN.

### Bio

Yingyan Lin received the B.S. and M.S. degrees in electrical engineering from Huazhong University of Science and Technology, China. Since August 2012, she has been pursuing the Ph.D. degree in electrical and computer engineering at the University of Illinois at Urbana-Champaign. Her research interests are in the design of high-speed mixed signal circuits and low power error resilient integrated circuits for machine learning and signal processing.

### Abstract

In this talk, Wei will present her work on developing tools for accurate system-level modeling and efficient hardware-software partitioning of SoC applications. She will present an automated SystemC generation and design space exploration flow for hardware accelerators, which takes C/C++ as input and outputs Pareto-optimal solution points considering power and latency trade-offs. Using this flow, they built an automated hardware-software partitioning flow, which partitions an SoC application under power and area constraints to minimize the overall program latency. The flow focuses on building a scalable compile-time partitioning algorithm while considering large sets of alternative hardware and software implementations for a particular application region. Experimental results demonstrate the capability of the approach to handle complex designs and yet output near-optimal partitioning decisions.

### Bio

Wei Zuo is a fourth-year PhD student in the Department of Electrical and Computer Engineering under the supervision of Prof. Deming Chen. Her research interests involve optimizations for high-level synthesis (HLS) and SoC modeling techniques and algorithms. Her research received the Best Paper Award at ICCAD’15 for work on accelerator modeling for SoC design, and the Best Paper Award at CODES+ISSS’13 for work on improving polyhedral code generation for high-level synthesis. Wei received the B.S. degree in electrical and electronics engineering from Beijing Institute of Technology, China, and the M.S. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign.

## Machine Learning and Signal Processing

### Abstract

Throughout music history, theorists have identified and documented rules that capture the decisions of composers. This line of research asks, “Can a machine behave like a music theorist?” It presents MUS-ROVER, a self-learning system for automatically discovering rules from symbolic music. MUS-ROVER performs feature learning via n-gram models to extract compositional rules, i.e., statistical patterns over the resulting features. We evaluate MUS-ROVER on Bach’s (SATB) chorales, demonstrating that it can recover known rules, as well as identify new, characteristic patterns for further study. We discuss how the extracted rules can be used in both machine and human composition.
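The n-gram statistics that MUS-ROVER builds rules from can be sketched as follows. The melodic-interval feature and the sequence below are a toy illustration, not data from the chorales, and MUS-ROVER's rule selection is considerably more sophisticated than taking the most frequent pattern:

```python
from collections import Counter

def ngram_counts(sequence, n):
    """Frequencies of contiguous n-grams over a symbolic feature sequence."""
    return Counter(tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))

# Toy feature sequence: melodic intervals (in semitones) of a voice line
intervals = [2, 2, -1, 2, 2, -1, 2, 2]
bigrams = ngram_counts(intervals, 2)
# The most frequent bigram is a candidate statistical "rule" over this feature
rule, count = bigrams.most_common(1)[0]
```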

### Bio

Haizi Yu is a Ph.D. student in the Department of Computer Science. He received his M.S. degree in Computer Science from Stanford University, and his B.S. degree from the Department of Automation at Tsinghua University. His research interests include machine learning, interpretable feature learning, automatic knowledge discovery, and music intelligence.

### Breaking the Bandwidth Barrier: Geometrical Adaptive Entropy Estimation

Estimators of information theoretic measures such as entropy and mutual information are a basic workhorse for many downstream applications in modern data science. State-of-the-art approaches have been either geometric (nearest-neighbor (NN) based) or kernel-based (with a globally chosen bandwidth). In this paper, we combine both of these approaches to design new estimators of entropy and mutual information that outperform state-of-the-art methods. Our estimator uses local bandwidth choices of k-NN distances with a finite k, independent of the sample size. Such a local, data-dependent choice improves performance in practice, but the bandwidth vanishes at a fast rate, leading to a non-vanishing bias. We show that the asymptotic bias of the proposed estimator is universal; it is independent of the underlying distribution. Hence, it can be pre-computed and subtracted from the estimate. As a byproduct, we obtain a unified way of obtaining both kernel and NN estimators. The corresponding theoretical contribution relating the asymptotic geometry of nearest neighbors to order statistics is of independent mathematical interest.
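The proposed estimator adds a local bandwidth choice and a universal bias correction on top of the classical fixed-k nearest-neighbor (Kozachenko-Leonenko) entropy estimator, which in one dimension can be sketched as:

```python
import math
import random

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def psi_int(n):
    """Digamma at a positive integer: psi(n) = -gamma + H_{n-1}."""
    return -GAMMA + sum(1.0 / j for j in range(1, n))

def kl_entropy_1d(samples, k=3):
    """Fixed-k Kozachenko-Leonenko entropy estimate (in nats) for 1-D data:
    psi(N) - psi(k) + log 2 + (1/N) * sum_i log r_k(i),
    where r_k(i) is the distance from sample i to its k-th nearest neighbor."""
    x = sorted(samples)
    n = len(x)
    total_log_r = 0.0
    for i, xi in enumerate(x):
        # in sorted order, the k nearest neighbors lie within k positions of i
        window = [abs(xi - x[j])
                  for j in range(max(0, i - k), min(n, i + k + 1)) if j != i]
        window.sort()
        total_log_r += math.log(window[k - 1])
    return psi_int(n) - psi_int(k) + math.log(2.0) + total_log_r / n

# Standard normal samples: true differential entropy = 0.5 * ln(2*pi*e) ~ 1.4189
random.seed(0)
h = kl_entropy_1d([random.gauss(0.0, 1.0) for _ in range(2000)], k=3)
```

Note how k stays fixed as the sample size grows, so the effective local bandwidth r_k shrinks; this is exactly the regime in which the paper analyzes and removes the resulting bias.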

### Bio

Weihao Gao has been a Ph.D. student in the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign (UIUC) since 2016. He is working with Prof. Sewoong Oh and Prof. Pramod Viswanath. He received his B.E. in Computer Science from the Institute for Interdisciplinary Information Sciences at Tsinghua University in 2014, and will receive his M.S. from the Department of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign in 2016. His research interests include machine learning, information theory, statistics, and natural language processing.

### Abstract

Unsupervised video object segmentation aims at differentiating foreground objects from the background using video data and no human-specified input. It is an important technique for a wide variety of applications in video analysis. In this work, we propose a new unsupervised video segmentation algorithm which combines the advantages of a non-local diffusion process with extracted local information about edges and flow. Compared to existing approaches that focus only on estimating the foreground likelihood, we show that modeling the background gives a moderate performance boost. In addition, we employ CRF-based post-processing which further improves the results. We validate the effectiveness of our algorithm on the challenging and recently introduced DAVIS dataset and demonstrate results outperforming state-of-the-art methods by more than 4%, coming close to the performance of semi-supervised techniques.

### Bio

Yuan-Ting Hu is a graduate student in the Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign. She received her B.S. and M.S. in Computer Science from National Taiwan University. Her research interests include computer vision and machine learning. She is particularly interested in video analysis and its applications.

### Abstract

The use of flying robots has increased in the past few decades. However, using flying robots in proximity to humans raises issues. For example, flying robots must consider personal boundaries and perceived safety when operating in the presence of co-located humans. At present, there is little evidence describing optimizations for trajectory generation that minimize discomfort in human observers. To address this issue, we developed a virtual reality test environment for evaluating physiological arousal in humans in response to various drone trajectories. We employ a multi-method approach, incorporating behavioral measures, self-report questionnaires, and biometric data to characterize comfort and perceived safety. Data were collected and analyzed using statistical methods to determine which aspects of the drone’s behavior were most salient to human observers. A dynamic model of the human’s perceived safety is required to determine an optimal trajectory for the flying robots; the dynamics of human safety perception need to be considered in the cost to be optimized. To do this, a recurrent neural network (RNN) model is trained from the experimental data, with the trajectories of the flying robots as the RNN input and the physiological arousal signals from human subjects as the RNN output. To train the RNN, a Bayesian neural network (BNN) approach was used, because a BNN accounts for modeling uncertainty so that the confidence of a prediction can be determined [1]. This confidence metric is useful for motion planning because the planning algorithm can verify whether the RNN’s prediction is reliable. For example, when the trajectory under consideration is not consistent with the training data, the algorithm needs to use alternative ways to estimate the humans’ perceived safety from the RNN model. Finally, the RNN’s performance will be validated by comparison with the statistical analysis described above.
Results from the statistical analysis and the RNN will be used for optimal trajectory generation in flying robots operating in the presence of co-located human observers.

[1] Y. Gal, "Uncertainty in Deep Learning," Ph.D. thesis, University of Cambridge, 2016.
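The dropout-based uncertainty estimate of [1] can be sketched in miniature: repeated stochastic forward passes give a predictive mean and a spread that serves as the confidence signal. The toy one-layer network and its weights below are hypothetical, standing in for the trajectory-to-arousal RNN:

```python
import random

def dropout_forward(x, weights, p_drop=0.5):
    """One stochastic forward pass of a toy one-layer net with dropout:
    each hidden unit is kept with probability (1 - p_drop) and rescaled."""
    hidden = [max(0.0, w * x) for w in weights]   # ReLU hidden units
    kept = [h / (1 - p_drop) if random.random() > p_drop else 0.0
            for h in hidden]
    return sum(kept) / len(kept)

def mc_dropout_predict(x, weights, n_samples=1000):
    """Monte-Carlo dropout: repeat the stochastic pass and report the
    predictive mean and a spread that acts as a confidence signal."""
    ys = [dropout_forward(x, weights) for _ in range(n_samples)]
    mean = sum(ys) / n_samples
    var = sum((y - mean) ** 2 for y in ys) / n_samples
    return mean, var ** 0.5

random.seed(0)
mean, std = mc_dropout_predict(2.0, weights=[0.5, -0.3, 0.8])
# A wide std flags predictions (e.g., for trajectories unlike the training
# data) that the motion planner should treat as unreliable.
```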

### Bio

Hyung-Jin Yoon was born in Seoul, South Korea, in 1981. He received the B.E. degree in mechanical engineering from Hanyang University, Seoul, South Korea, in 2006, and the M.E. degree in electrical engineering from Sungkyunkwan University, Seoul, South Korea, in 2013. In 2006, he joined Hyundai Motor Company as a research engineer and worked on electric car development. Since 2013, he has been pursuing the Ph.D. degree in mechanical engineering at the University of Illinois at Urbana-Champaign. His current research interests include data-driven modeling and control for human-robot interaction.