Research Overview

Today’s world is characterized by a massive growth in information-hungry, human-driven devices as well as smart machines that act without human intervention. In a dynamic and stochastic environment, and while being uncertain about the actions of their counterparts, these devices and machines engage in real-time communication to fulfill specific goals, thereby creating complex dynamic networks and producing enormous amounts of data. Our research group contributes to the analysis and optimization of such systems. From a theoretical perspective, the developed analytical methods lie at the intersection of game theory, artificial intelligence, and data science. The main research directions are:

  • Online Decision-Making with Limited Feedback: This is a class of sequential optimization problems in which, given a set of actions, a player selects one action at each round and receives some reward. The rewards are not known to the player in advance; instead, upon playing an action, the player observes some feedback. In such an unknown setting, at each round the player may lose some reward (or incur some cost) for not having selected the best action. The player must decide which action to take in a sequence of trials so that its accumulated loss over the horizon is minimized, or its discounted reward over the horizon is maximized; a minimal bandit-style sketch of this setting follows this list. Reinforcement learning, inverse reinforcement learning, active learning, and transfer learning belong to this category.
  • Interaction and Playing Games under Uncertainty: The aforementioned challenges extend to the multi-agent setting. In many real-world problems, a number of entities affect each other through their decisions while being uncertain about each other’s characteristics, including abilities and preferences. Moreover, the environment can have a random and a priori unknown state on which the agents and their decisions depend. The agents therefore interact with each other and with the environment to learn both the environment and how their decisions affect one another’s utilities. Through learning, the agents should arrive at decision-making strategies whose outcome is an efficient equilibrium; the second sketch after this list illustrates such repeated interaction. Multi-agent reinforcement learning, mechanism design under uncertainty, persuasion, and similar problems fall into this category.
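
A minimal sketch of the limited-feedback setting from the first item, using an epsilon-greedy learner on a stochastic multi-armed bandit; the arm means, noise level, horizon, and exploration rate are illustrative assumptions rather than values from our work:

    import numpy as np

    rng = np.random.default_rng(0)
    arm_means = np.array([0.2, 0.5, 0.7])   # hypothetical expected rewards per action
    horizon = 10_000
    epsilon = 0.1

    counts = np.zeros(len(arm_means))        # how often each action has been played
    estimates = np.zeros(len(arm_means))     # empirical mean reward per action
    cumulative_regret = 0.0

    for t in range(horizon):
        # Explore with probability epsilon, otherwise exploit the current best estimate.
        if rng.random() < epsilon:
            action = int(rng.integers(len(arm_means)))
        else:
            action = int(np.argmax(estimates))

        # Only the reward of the chosen action is observed (limited feedback).
        reward = rng.normal(arm_means[action], 0.1)

        # Incremental update of the empirical mean of the played action.
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]

        # Regret: expected reward lost for not playing the best action this round.
        cumulative_regret += arm_means.max() - arm_means[action]

    print(f"cumulative regret after {horizon} rounds: {cumulative_regret:.1f}")

Because exploration never stops, this sketch accumulates regret at a small linear rate; decaying epsilon over time, or using an upper-confidence-bound rule, is what yields the sublinear regret typically sought in this setting.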

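In the same spirit, a minimal sketch of repeated interaction under uncertainty for the second item: two independent epsilon-greedy learners repeatedly play a 2x2 coordination game whose payoff matrix (an illustrative assumption) is unknown to them. Each agent observes only its own realized payoff, yet the pair gradually settles on one of the game's equilibria:

    import numpy as np

    rng = np.random.default_rng(1)
    # Common payoff for (agent 0's action, agent 1's action); coordinating on action 1 is best.
    payoff = np.array([[1.0, 0.0],
                       [0.0, 2.0]])
    horizon = 5_000
    epsilon, step = 0.1, 0.05

    values = [np.zeros(2), np.zeros(2)]      # each agent's value estimate per own action

    for t in range(horizon):
        actions = []
        for v in values:
            # Each agent explores with probability epsilon, otherwise plays its best estimate.
            if rng.random() < epsilon:
                actions.append(int(rng.integers(2)))
            else:
                actions.append(int(np.argmax(v)))

        # The realized payoff depends on both decisions but is observed privately.
        reward = payoff[actions[0], actions[1]] + rng.normal(0, 0.1)
        for agent, v in enumerate(values):
            v[actions[agent]] += step * (reward - v[actions[agent]])

    print("learned action values:", [np.round(v, 2).tolist() for v in values])

Which equilibrium the pair reaches depends on initialization and exploration; steering the agents toward the efficient one generally requires richer mechanisms, which is the kind of question this research direction concerns.
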
In terms of applications, the research group focuses on distributed control and resource allocation in edge computing, caching, the Internet of Things, software-defined networking, UAV networks, and next-generation wireless networks.