Type of course: Seminar
Study programs: Master Computer Science, Master Data Science, Master Software Systems Engineering
Offering chair: Machine Learning and Reasoning (i6), RWTH Aachen
Description: Students will present recent machine-learning papers from a list compiled by the professor. Topics include deep learning, reinforcement learning, transformer and GNN architectures, planning, and LLMs.
Recommended prior knowledge: Bachelor's degree in CS or equivalent; basic AI and ML courses.
Kickoff: 05/10/2023, 14:00-16:00, Room 228, Theaterstr. 35, 2nd floor
Format, organization, evaluation: TBA
Website: https://www-i6.informatik.rwth-aachen.de/~hector.geffner/Seminar-W2023.html
Introduction -- Slides
Tentative, partial list of papers
GNNs, Logic, Transformers
Natural Language is All a Graph Needs. Ruosong Ye, Caiqi Zhang, Runhui Wang, Shuyuan Xu, Yongfeng Zhang, 2023
One Model, Any CSP: Graph Neural Networks as Fast Global Search Heuristics for Constraint Satisfaction. Jan Tönshoff, Berke Kisin, Jakob Lindner, Martin Grohe, 2022
On the Correspondence Between Monotonic Max-Sum GNNs and Datalog. David Tena Cucala, Bernardo Cuenca Grau, Boris Motik, Egor V. Kostylev, 6/2023
Towards Arbitrarily Expressive GNNs in O(n^2) Space by Rethinking Folklore Weisfeiler-Lehman. Jiarui Feng, Lecheng Kong, Hao Liu, Dacheng Tao, Fuhai Li, Muhan Zhang, Yixin Chen, 6/2023
On the Paradox of Learning to Reason from Data. Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, Guy Van den Broeck, 5/2022
Learning Transformer Programs. Dan Friedman, Alexander Wettig, Danqi Chen, 6/2023
Boolformer: Symbolic Regression of Logic Functions with Transformers. Stéphane d'Ascoli, Samy Bengio, Josh Susskind, Emmanuel Abbé, 9/2023
LLMs, Transformers
Large Language Models as Commonsense Knowledge for Large-Scale Task Planning. Zirui Zhao, Wee Sun Lee, David Hsu, 5/2023
Pure Transformers are Powerful Graph Learners. Jinwoo Kim, Tien Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, Seunghoon Hong, 7/2022
TabPFN: A Transformer That Solves Small Tabular Classification Problems in a Second. Noah Hollmann, Samuel Müller, Katharina Eggensperger, Frank Hutter, 10/2022
Thought Cloning: Learning to Think while Acting by Imitating Human Thinking. Shengran Hu, Jeff Clune, 6/2023
Large Language Models as General Pattern Machines. Suvir Mirchandani, Fei Xia, Pete Florence, Brian Ichter, Danny Driess, Montserrat Gonzalez Arenas, Kanishka Rao, Dorsa Sadigh, Andy Zeng, 7/2023
RL
Contrastive Learning as Goal-Conditioned Reinforcement Learning. Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, Sergey Levine, 2022
Behavior From the Void: Unsupervised Active Pre-Training. Hao Liu, Pieter Abbeel, 10/2021
Deep Hierarchical Planning from Pixels. Danijar Hafner, Kuang-Huei Lee, Ian Fischer, Pieter Abbeel, 2021
Discovering and Achieving Goals via World Models. Russell Mendonca, Oleh Rybkin, Kostas Daniilidis, Danijar Hafner, Deepak Pathak, 2021
Decision Transformer: Reinforcement Learning via Sequence Modeling. Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch, 6/2021
Pretraining for Language-Conditioned Imitation with Transformers. Aaron (Louie) Putterman, Kevin Lu, Igor Mordatch, Pieter Abbeel, 2021
Investigating the Properties of Neural Network Representations in Reinforcement Learning. Han Wang, Erfan Miahi, Martha White, Marlos C. Machado, Zaheer Abbas, Raksha Kumaraswamy, Vincent Liu, Adam White, 1/2023
Voyager: An Open-Ended Embodied Agent with Large Language Models. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar, 5/2023
Planning/Subgoals/Skills
Reward Machines: Exploiting Reward Function Structure in Reinforcement Learning. Rodrigo Toro Icarte, Toryn Q. Klassen, Richard Valenzano, Sheila A. McIlraith, 2022
Generalized Planning in PDDL Domains with Pretrained Large Language Models. Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Pack Kaelbling, Michael Katz, 2023
Predicate Invention for Bilevel Planning. Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomas Lozano-Perez, Leslie Pack Kaelbling, Joshua Tenenbaum, 2022
Compositional Foundation Models for Hierarchical Planning. Anurag Ajay, Seungwook Han, Yilun Du, Shuang Li, Abhi Gupta, Tommi Jaakkola, Josh Tenenbaum, Leslie Kaelbling, Akash Srivastava, Pulkit Agrawal, 9/2023
Improving Intrinsic Exploration with Language Abstractions. Jesse Mu, Victor Zhong, Roberta Raileanu, Minqi Jiang, Noah Goodman, Tim Rocktäschel, Edward Grefenstette, 2022
Exploration via Elliptical Episodic Bonuses. Mikael Henaff, Roberta Raileanu, Minqi Jiang, Tim Rocktäschel, 2022
Out-of-Distribution Generalization by Neural-Symbolic Joint Training. Anji Liu, Hongming Xu, Guy Van den Broeck, Yitao Liang, 2023
Learning to Model the World with Language. Jessy Lin, Yuqing Du, Olivia Watkins, Danijar Hafner, Pieter Abbeel, Dan Klein, Anca Dragan, 7/2023
Leveraging Pre-trained Large Language Models to Construct and Utilize World Models for Model-based Task Planning. Lin Guan, Karthik Valmeekam, Sarath Sreedharan, Subbarao Kambhampati, 5/2023
Interpretable and Explainable Logical Policies via Neurally Guided Symbolic Abstraction. Quentin Delfosse, Hikaru Shindo, Devendra Dhami, Kristian Kersting, 6/2023
Reinforcement Learning with Option Machines. Floris den Hengst, Vincent Francois-Lavet, Mark Hoogendoorn, Frank van Harmelen, 2022
Learning Rational Subgoals from Demonstrations and Instructions. Zhezheng Luo, Jiayuan Mao, Jiajun Wu, Tomás Lozano-Pérez, Joshua B. Tenenbaum, Leslie Pack Kaelbling, 2023
Robotics
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox, 3/2020
Do As I Can, Not As I Say: Grounding Language in Robotic Affordances. Michael Ahn, Anthony Brohan, .. Andy Zeng, 2022
SORNet: Spatial Object-Centric Representations for Sequential Manipulation. Wentao Yuan, Chris Paxton, Karthik Desingh, Dieter Fox, 9/2022
Language-Driven Representation Learning for Robotics. Siddharth Karamcheti, Suraj Nair, Annie S. Chen, Thomas Kollar, Chelsea Finn, Dorsa Sadigh, Percy Liang, 2/2023
Grounding Predicates through Actions. Toki Migimatsu, Jeannette Bohg, 2022
Transferable Task Execution from Pixels through Deep Planning Domain Learning. Kei Kase, Chris Paxton, Hammad Mazhar, Tetsuya Ogata, Dieter Fox, 2020
Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation. Mohit Shridhar, Lucas Manuelli, Dieter Fox, 2022
Text2Motion: From Natural Language Instructions to Feasible Plans. Kevin Lin, Christopher Agia, Toki Migimatsu, Marco Pavone, Jeannette Bohg, 6/2023
Vision
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby, 6/2021
Taming Transformers for High-Resolution Image Synthesis. Patrick Esser, Robin Rombach, Björn Ommer, 6/2021
Neural Discrete Representation Learning. Aaron van den Oord, Oriol Vinyals, Koray Kavukcuoglu, 2018
ARC Benchmark
The ConceptARC Benchmark: Evaluating Understanding and Generalization in the ARC Domain. Arseny Moskvichev, Victor Vikram Odouard, Melanie Mitchell, 5/2023
Communicating Natural Programs to Humans and Machines. Samuel Acquaviva, Yewen Pu, Marta Kryven, Theodoros Sechopoulos, Catherine Wong, Gabrielle E Ecanow, Maxwell Nye, Michael Henry Tessler, Joshua B. Tenenbaum, 2022
Graphs, Constraints, and Search for the Abstraction and Reasoning Corpus. Yudong Xu, Elias B. Khalil, Scott Sanner, 2022
LLMs and the Abstraction and Reasoning Corpus: Successes, Failures, and the Importance of Object-based Representations. Yudong Xu, Wenhao Li, Pashootan Vaezipoor, Scott Sanner, Elias B. Khalil, 2023