
 Tian Guo

Assistant Professor
Computer Science Department
Worcester Polytechnic Institute
100 Institute Road
Worcester, MA 01609
Office: Fuller Labs 138

tian@wpi.edu

(508)831-6860

github.com/belindanju


Current Projects

Mobile-aware Cloud Resource Management (MOBILESCALE): Modern mobile applications increasingly rely on cloud data centers for both compute and storage. To guarantee performance for cloud customers, cloud platforms usually provide dynamic provisioning approaches that adjust resources to meet the demands of fluctuating workloads. Modern mobile workloads, however, exhibit three key distinguishing characteristics, namely a new type of spatial fluctuation, fluctuation at shorter time scales, and more frequent fluctuation, that make current provisioning approaches less effective. The MOBILESCALE project proposes new research on resource management for mobile workloads, which differ significantly from traditional cloud workloads.
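
To make "dynamic provisioning" concrete, here is a minimal Python sketch of the kind of reactive, threshold-based scaling rule that traditional cloud platforms rely on; the thresholds and the desired_servers rule are illustrative assumptions, not part of MOBILESCALE. Workloads that fluctuate faster and more often than such a rule can react are exactly what makes this kind of approach less effective.

    # Illustrative threshold-based autoscaling policy, not MOBILESCALE's design.
    SCALE_OUT_THRESHOLD = 0.70   # add a server above 70% average utilization
    SCALE_IN_THRESHOLD = 0.30    # remove a server below 30% average utilization

    def desired_servers(current_servers, avg_utilization):
        # Reactive rule: grow by one server when hot, shrink by one when cold.
        if avg_utilization > SCALE_OUT_THRESHOLD:
            return current_servers + 1
        if avg_utilization < SCALE_IN_THRESHOLD and current_servers > 1:
            return current_servers - 1
        return current_servers

    # e.g. a 10-server cluster at 85% utilization grows to 11 servers:
    desired_servers(10, 0.85)   # -> 11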

Efficient Mobile Deep Inference (MODI): An ever-increasing number of mobile applications leverage deep learning models to provide novel and useful features, such as real-time language translation and object recognition. However, the current mobile inference paradigm requires application developers to statically trade off inference accuracy against inference speed at development time. As a result, the mobile user experience suffers under dynamic inference scenarios and heterogeneous device capabilities. The MODI project proposes new research in designing and implementing a mobile-aware deep inference platform that combines innovations in both algorithm and system optimizations.
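
As a small illustration of trading off accuracy against speed at run time rather than at development time, the sketch below picks the most accurate model that fits a device's latency budget; the model names, numbers, and the pick_model helper are hypothetical, not MODI's actual implementation.

    # Hypothetical runtime model selection; numbers are made up for illustration.
    MODELS = [
        {"name": "small_cnn", "accuracy": 0.68, "latency_ms": 30},
        {"name": "large_cnn", "accuracy": 0.76, "latency_ms": 180},
    ]

    def pick_model(latency_budget_ms):
        # Choose the most accurate model that still meets the latency budget;
        # fall back to the fastest model if none fits.
        feasible = [m for m in MODELS if m["latency_ms"] <= latency_budget_ms]
        if not feasible:
            return min(MODELS, key=lambda m: m["latency_ms"])
        return max(feasible, key=lambda m: m["accuracy"])

    # A low-end phone with a 50 ms budget gets "small_cnn"; a flagship with
    # a 200 ms budget gets "large_cnn".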

Previous Projects


Cloud Cost Reduction: It is always about money, isn't it?
One key benefit of cloud platforms is the ability to acquire resources on demand to handle peak workloads. For enterprises with existing IT infrastructure, however, it is not obvious how to transition to cloud platforms cost-effectively. I built a cloud bursting system that answers questions such as when, and how much, workload to move from a private data center to public clouds, and that automates the process. This system serves as a building block for investigating problems such as pooling cloud resources.
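
As a rough sketch of the "when and how much" decision, the snippet below bursts whatever demand exceeds the private data center's capacity to the public cloud; the capacity and demand numbers are made up, and a real bursting system also has to weigh factors such as data movement and pricing.

    def burst_decision(demand, private_capacity):
        # Keep what fits in the private data center; burst the overflow to the public cloud.
        local = min(demand, private_capacity)
        public = max(0, demand - private_capacity)
        return local, public

    # Made-up numbers: 80 units of private capacity, 110 units of demand.
    burst_decision(110, 80)   # -> (80, 30): 30 units are burst to the public cloud.
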
Of course, as cloud customers (people who host cloud services of some sort), we constantly face cost and performance trade-offs. Can we get away with 10 servers, 100 servers, or even 1000 servers? And what about our monthly bill? Yes, budgets are a real-world problem. Well, to help out, I did some work exploiting a type of very cheap but volatile cloud resource: spot servers. The high-level idea is to provide system- and application-level mechanisms that allow customers to run interactive, batch, and batch-interactive applications on spot servers for as long as possible. These mechanisms are guided by our risk-aware, cost-effective policies.
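
One toy way to see "risk-aware, cost-effective" in action is to compare the expected cost of running on spot servers, including redoing work lost to a revocation, against simply paying on-demand prices. All prices and probabilities below are made up for illustration.

    def expected_spot_cost(spot_price, on_demand_price, hours, revocation_prob, redo_fraction):
        # Expected spot bill plus the expected cost of redoing lost work on on-demand servers.
        redo_cost = revocation_prob * redo_fraction * hours * on_demand_price
        return spot_price * hours + redo_cost

    # Made-up numbers: $0.03/hr spot vs $0.10/hr on-demand, a 10-hour job,
    # a 20% chance of revocation, and half the work redone after a revocation.
    spot_cost = expected_spot_cost(0.03, 0.10, 10, 0.20, 0.5)   # $0.40
    on_demand_cost = 0.10 * 10                                  # $1.00
    use_spot = spot_cost < on_demand_cost                       # True: spot wins here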


Mobility Support in the Cloud: Providing better services for mobile users.
Traffic from mobile devices has surpassed other types and is starting to dominate cloud traffic. Despite growing computation and storage, mobile devices' capabilities are still limited by their battery life. Also, today's mobile devices are equipped with at least two network interfaces, one for WiFi and the other for cellular. And it is not just about the devices: mobile users tend to move around, which introduces mobility problems. The bottom line is that mobile traffic has unique characteristics that current cloud platforms are poorly equipped to support. I think it is very interesting to enhance cloud platforms' support for mobile applications by taking these characteristics into account. As part of this effort, I worked on VMShadow, which moves cloud applications closer to their users and thus improves performance. The techniques proposed in VMShadow are general and could potentially be applied to other mobility problems. I would love to work more in this area.
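
A minimal illustration of "moving an application closer to its users": measure latency from the user to each candidate cloud site and pick the lowest. The site names and numbers are hypothetical, and VMShadow's actual migration decisions are considerably more involved.

    def closest_site(latency_ms):
        # latency_ms maps each candidate cloud site to the measured RTT from the user.
        return min(latency_ms, key=latency_ms.get)

    # Hypothetical measurements for a user who has moved to the US east coast:
    closest_site({"us-east": 18, "us-west": 82, "eu-west": 95})   # -> "us-east"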


Geo-elasticity for Global Workload: I say automation is always preferred.
As applications serve more users from geographically distributed locations, their workloads exhibit spatial dynamics in addition to temporal ones. What exactly are spatial workload dynamics? Well, just imagine a global application that needs to handle different amounts of requests from different locations on a particular day. For example, application traffic from the United States might spike on Black Friday, while traffic from China might surge on Singles' Day, i.e., 11.11. Handling such workload dynamics effectively requires provisioning enough server resources in data centers close to the workload spikes. Manually selecting cloud sites and setting up servers is tedious and time-consuming. So I worked on two systems, DBScale and GeoScale, that automate the scaling process across geographically distributed data centers. This ability is referred to as geo-elasticity and is an important step towards a mobile-aware cloud platform.
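
To make geo-elasticity a bit more concrete, here is a back-of-the-envelope capacity plan: provision enough servers in each region to absorb that region's forecast request rate, plus some headroom. The per-server service rate, headroom factor, and forecasts are assumptions for illustration, not what DBScale or GeoScale actually compute.

    import math

    def servers_needed(request_rate, per_server_rate, headroom=1.2):
        # Enough servers for the regional request rate, with 20% headroom by default.
        return math.ceil(headroom * request_rate / per_server_rate)

    # Hypothetical Black Friday / Singles' Day forecasts (requests/sec), 100 req/s per server:
    forecast = {"us-east": 4500, "eu-west": 1200, "ap-east": 3000}
    plan = {region: servers_needed(rate, 100) for region, rate in forecast.items()}
    # -> {"us-east": 54, "eu-west": 15, "ap-east": 36}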


Big Data Analytics Framework: Towards faster and cheaper data processing.
Today, data are literally everywhere, being collected, transferred, and analyzed in cloud platforms. Tasks ranging from simple aggregation to interactive machine learning benefit from access to a large number of servers running in parallel. However, the trade-off between how many servers to use and how long to wait is often constrained by budgets. Luckily, cloud providers have rolled out cheaper resources that potentially allow us to rent 10x more servers at the same cost. But such low prices come with the risk of losing servers with only minutes of warning. I worked on SpotOn and Flint, which manage such risks for batch and interactive applications. I believe the mechanisms underlying these two systems are useful for providing cost-effective big data analytics in cloud environments.
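
One standard way to reason about losing servers on short notice is periodic checkpointing; the classic Young approximation below picks a checkpoint interval from the mean time between revocations and the time it takes to write a checkpoint. This is a generic illustration of the underlying idea, not the specific policies inside SpotOn or Flint.

    import math

    def checkpoint_interval(mean_time_between_revocations, checkpoint_cost):
        # Young's approximation: interval ~ sqrt(2 * MTBF * checkpoint cost), all in seconds.
        return math.sqrt(2 * mean_time_between_revocations * checkpoint_cost)

    # Made-up numbers: a revocation every 2 hours on average, 60 s to write a checkpoint.
    checkpoint_interval(2 * 3600, 60)   # ~930 s, i.e., checkpoint roughly every 15 minutes.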


Green Data Center: It is not just about cutting electricity bills.
Data centers consume a humongous amount of electricity every year. As more data centers are built, electricity consumption will continue to increase if no precautions are taken. Companies like Google, Facebook, and Apple constantly work on improving data center energy efficiency and publish their power usage effectiveness (PUE) values. Because such PUEs often correspond to yearly averages, they contain very limited information about how energy is consumed inside each data center. To gain insight into energy efficiency, my co-authors and I analyzed MGHPCC, a state-of-the-art 15 MW green data center that incorporates many of the technological advances used in commercial data centers. I believe such insights are useful in helping the research community design and evaluate new energy-efficiency optimizations.
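
For reference, PUE is simply the ratio of total facility energy to the energy delivered to IT equipment, so a value of 1.0 would mean every joule goes to the servers. The figures below are made up just to show the arithmetic; they are not measurements from MGHPCC.

    def pue(total_facility_kwh, it_equipment_kwh):
        # Power Usage Effectiveness: total facility energy divided by IT equipment energy.
        return total_facility_kwh / it_equipment_kwh

    # Made-up annual figures: 30 GWh drawn by the facility, 25 GWh delivered to IT equipment.
    pue(30_000_000, 25_000_000)   # -> 1.2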