In this project, the investigators will characterize mobile gaming and deep learning (DL), especially convolutional neural network (CNN), workloads, i.e., CPU- or GPU-dominant workloads in gaming, and model loading and inference workloads (convolution, pooling, and fully connected layers) in DL, using data collected on mobile platforms such as utilization, frequency, performance metrics, and power consumption. In the next phase, we will build power models based on this characterization, ranging from a simple yet effective heuristic model to more sophisticated ML-enhanced models, balancing accuracy and interpretability; we will then propose Dynamic Power Management (DPM) strategies that orchestrate energy-efficient policies across heterogeneous processors and evaluate these strategies on gaming and DL test datasets. In the final phase, building on the preceding observations and analyses, we will introduce energy-efficient DL inference optimization through adaptive model selection for mobile and embedded systems.
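
To make the two modeling ideas above concrete, the following is a minimal sketch, not the project's actual implementation: it assumes a utilization- and frequency-based heuristic power model for a processor on a mobile SoC and a simple adaptive model-selection rule that picks the most accurate CNN variant whose predicted per-inference energy fits an energy budget. The coefficients, model names, and budget values are hypothetical placeholders for illustration only.

```python
from dataclasses import dataclass


@dataclass
class ProcessorSample:
    """One telemetry sample for a processor (CPU cluster or GPU)."""
    utilization: float  # 0.0 - 1.0
    freq_ghz: float     # current DVFS frequency
    static_w: float     # hypothetical fitted static-power term (W)
    dyn_coeff: float    # hypothetical fitted dynamic coefficient


def heuristic_power_w(s: ProcessorSample) -> float:
    """Heuristic estimate: static power plus a dynamic term scaling with
    utilization and (roughly) the cube of frequency."""
    return s.static_w + s.dyn_coeff * s.utilization * s.freq_ghz ** 3


@dataclass
class CnnVariant:
    """A candidate CNN for adaptive model selection (values are illustrative)."""
    name: str
    top1_accuracy: float  # measured offline
    latency_ms: float     # measured on the target SoC
    avg_power_w: float    # measured or predicted during inference

    @property
    def energy_mj(self) -> float:
        # Energy per inference (mJ) = average power (W) * latency (ms)
        return self.avg_power_w * self.latency_ms


def select_model(variants, energy_budget_mj: float) -> CnnVariant:
    """Pick the most accurate variant whose per-inference energy fits the
    budget; fall back to the cheapest variant if none fits."""
    feasible = [v for v in variants if v.energy_mj <= energy_budget_mj]
    if feasible:
        return max(feasible, key=lambda v: v.top1_accuracy)
    return min(variants, key=lambda v: v.energy_mj)


if __name__ == "__main__":
    gpu = ProcessorSample(utilization=0.7, freq_ghz=0.8, static_w=0.3, dyn_coeff=1.2)
    print(f"Estimated GPU power: {heuristic_power_w(gpu):.2f} W")

    zoo = [
        CnnVariant("mobilenet_v2", 0.72, 25.0, 2.0),   # ~50 mJ per inference
        CnnVariant("resnet50",     0.76, 90.0, 3.5),   # ~315 mJ per inference
        CnnVariant("squeezenet",   0.58, 15.0, 1.5),   # ~22.5 mJ per inference
    ]
    chosen = select_model(zoo, energy_budget_mj=100.0)
    print(f"Selected model under budget: {chosen.name}")
```

In practice, the heuristic coefficients would be fitted from the collected utilization, frequency, and power traces, and the per-variant accuracy, latency, and power figures would come from on-device profiling rather than the placeholder numbers used here.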