AAAI-2019 Workshop
  • Home
  • Call for Papers
  • Schedule
  • Invited Speakers
  • Organizers

Schedule

Workshop Day: January 28, 2019 (Hibiscus 1)
 The full workshop proceedings are published on arXiv.
 
Workshop Details:
  • Includes three sessions:
    • Games & Environments (G/E) - 2 invited talks and 2 oral paper presentations
    • Autonomous Vehicles & Robotics (AV/R) - 2 invited talks and 1 oral paper presentation
    • Vision & Language (V/L) - 2 invited talks
  • Abstracts for invited talks are listed following the Schedule section on this page.
  • The bios for all invited speakers can be found on the Invited Speakers page.

Schedule

Time | Session | Presenter(s) | Talk / Paper Title
9:00 - 9:25 | Workshop Introduction | Marwan Mattar & Danny Lange | Welcome Message and Overview of Simulations for AI [slides]
9:25 - 10:00 | Invited Talk (G/E) | Yuandong Tian | Building Scalable Framework and Environment for Reinforcement Learning [slides]
10:00 - 10:15 | Paper (G/E) | Arthur Juliani | The Obstacle Tower: A Generalization Challenge in Vision, Control and Planning [slides][paper][arXiv][GitHub]
10:15 - 10:30 | Paper (G/E) | Raluca Gaina | Efficient Evolutionary Methods for Game Agent Optimisation: Model-based is Best [slides][paper][arXiv][code]
10:30 - 11:00 | Coffee Break | - | -
11:00 - 11:35 | Invited Talk (G/E) | Julian Togelius | Video Games as Environments for Learning and Planning: The Final Frontier? [slides]
11:35 - 12:10 | Invited Talk (AV/R) | Shital Shah | Developing Autonomous Agents for the Open World using Simulation [slides]
12:10 - 12:25 | Paper (AV/R) | Ervin Teng | Learning to Learn in Simulation [slides][paper][arXiv]
12:30 - 2:00 | Break for Lunch | - | -
2:00 - 2:35 | Invited Talk (AV/R) | Maciek Chociej | High-performance Rendering for Reinforcement Learning [slides]
2:35 - 3:35 | Poster Session | All Authors | (posters listed below)
  1. Agent-based Adaptive Level Generation for Dynamic Difficulty Adjustment in Angry Birds [paper][arXiv]
  2. Dungeon Crawl Stone Soup as an Evaluation Domain for Artificial Intelligence [paper][arXiv]
  3. Efficient Evolutionary Methods for Game Agent Optimisation: Model-based is Best [slides][paper][arXiv][code]
  4. Learning to Learn in Simulation [slides][paper][arXiv]
  5. Marathon Environments: Multi-agent Continuous Control Benchmarks in a Modern Video Game Engine [paper][arXiv][code]
  6. Situational Grounding with Multimodal Simulations [paper][arXiv]
  7. The Obstacle Tower: A Generalization Challenge in Vision, Control and Planning [slides][paper][arXiv][GitHub]
3:15 - 3:45 | Coffee Break | - | -
3:45 - 4:20 | Invited Talk (V/L) | Manolis Savva | Towards an Embodied 3D Simulation Platform [slides]
4:20 - 4:55 | Invited Talk (V/L) | Yoav Artzi | Studying Natural Language in Simulated Environments [slides]
4:55 - 5:00 | Closing Remarks | Marwan Mattar | -

Invited Talks

Yuandong Tian (Facebook AI Research)
  • Title: Building Scalable Framework and Environment for Reinforcement Learning
  • Abstract: Deep Reinforcement Learning (DRL) has made strong progress on many tasks traditionally considered difficult, such as complete-information games, navigation, and architecture search. Although the basic principle of DRL is simple and straightforward, making it work often requires substantially more effort than traditional supervised training. In this talk, we introduce our recently open-sourced ELF platform: an efficient, lightweight, and flexible framework to facilitate DRL research. We demonstrate the scalability of our platform by reproducing and open-sourcing the AlphaGoZero/AlphaZero framework using 2,000 GPUs over 1.5 weeks, achieving a superhuman Go AI that beat four top-30 professional players 20-0. We also demonstrate the usability of our platform by training agents in real-time strategy games with only a small amount of resources; the trained agent develops interesting tactics and beats rule-based AIs by a large margin. On the environment side, we propose House3D, which makes multi-room navigation easy at high frame rates. With House3D, we show that a model-based agent that plans ahead under uncertain information navigates unseen environments more successfully.

Julian Togelius (New York Univ.)
  • Title: Video Games as Environments for Learning and Planning: The Final Frontier?
  • Abstract: In the last two decades, video games have increasingly complemented and even supplanted board games as testbeds for AI research. In particular, such games have been used as environments for learning and planning. I sketch a very brief history of this development, focusing on the characteristics of these games. I then speculate on what characteristics the next generation of video game-based AI benchmarks will have. In other words, what haven't we solved yet?

Manolis Savva (Facebook AI Research & Simon Fraser Univ.)
  • Title: Towards an Embodied 3D Simulation Platform

Yoav Artzi (Cornell Univ.)
  • Title: Studying Natural Language in Simulated Environments
  • Abstract: Natural language understanding in grounded interactive scenarios is tightly coupled with the system's actions and its observations. Simulated environments provide an accessible framework for studying such problems: agents can easily experiment with executing instructions and observe the outcomes of their actions. In this talk, I will review our recent work on grounded natural language understanding using simulated environments. I will demonstrate the diversity of natural language problems that can be studied in such environments, including block manipulation, navigation, and the execution of household instructions. Finally, I will discuss two recent environments that make different aspects of the problem more realistic: the first focuses on realistic agent control, and the second on real-life observations. I will discuss formulating natural language tasks for both scenarios and how to address the learning and representation challenges raised by such complex environments. All the environments and corpora described in this talk are publicly available.

Maciek Chociej (OpenAI Robotics)
  • Title: High-performance Rendering for Reinforcement Learning
  • Abstract: Sim2real transfer is one of the central problems of ML-driven robotics: we can solve hard, complex tasks in simulation, but the resulting policies fail when applied in the real world. To generalize better, in both physical simulation and computer vision, we rely on massive scale and domain randomization. I will talk about our rendering backend, ORRB, which we used extensively to train Dactyl, a robotic hand capable of dexterous manipulation of physical objects. Its computer vision system was trained exclusively on synthetic data, so high rendering throughput, the ability to render many different visual phenomena, and rendering quality were key to closing the reality gap.

Shital Shah (Microsoft Research)
  • Title: Developing Autonomous Agents for the Open World using Simulation
  • Abstract: Simulation is rapidly becoming a critical tool for modern AI research and engineering. The ability to perform accurate physics simulation with photorealistic rendering can make the development and testing of computer vision, reinforcement learning, and deep learning algorithms an order of magnitude cheaper and faster. In this talk, we will explore various aspects of building such a simulator through our journey developing AirSim, a physically and visually realistic simulator built on modern game engine platforms. We will dive into its architecture patterns and examine the strengths and limitations of this approach. We will demonstrate various features of AirSim and provide quick guidance on getting started with our open-source offering. We will end with three case studies with diverse requirements, outlining lessons learned from using this simulation platform at Microsoft Research.