Seminar "Selected Topics in Human Language Technology and Pattern Recognition"
In the Summer Term 2016 the Lehrstuhl Informatik 6 will host a
seminar entitled "Selected Topics in Human Language Technology and Pattern
Recognition."
Registration for the seminar
Registration for the seminar is only possible online via the central registration page from Thursday, Jan. 14, to Wednesday, Jan. 27, 2016. A link can also be found on the Computer Science Department's homepage.
Prerequisites for participation in the seminar
- Bachelor students: Einführung in das wissenschaftliche Arbeiten (Proseminar)
- Master students: Bachelor degree
- Attendance of the lectures Pattern Recognition and Neural
Networks, Speech Recognition or Statistical Methods in Natural Language
Processing, or evidence of equivalent knowledge is highly recommended.
- For successful participants of the above lectures, seminar participation is guaranteed.
Seminar format and important dates
Please note the following deadlines:
- Proposals: initial proposals will be accepted up
until the start of the term's
lecture period (April 11, 2016) by email to the
seminar topic's supervisor. At this time, participants must
arrange an appointment with the relevant supervisor. Revised
proposals will be accepted up until two weeks after the start of the term.
- Article:
The deadlines were announced via email.
- Presentation slides: PDF must be submitted at
least 1 week prior to the trial
presentation date by email to the seminar topic's
supervisor.
- Trial presentations: at least 2 weeks prior to the
actual presentation date; refer to the topics section.
- Seminar presentations:
Friday 10:00-14:00: 03.06, 10.06, 17.06; Monday 14:00-17:00: 06.06, 13.06, 20.06, 27.06, 04.07
- Final (possibly corrected) articles and presentation slides:
PDF must be submitted at the latest 4
weeks after the presentation date by email to the seminar topic's supervisor.
- Compulsory attendance: in order to pass, participants must attend all presentation sessions.
- Ethical Guidelines: The Computer Science Department of RWTH Aachen University has adopted ethical guidelines for the authoring of academic work, such as seminar reports. Each student has to comply with these guidelines. In this regard, you, as a seminar attendant, have to sign a declaration of compliance, in which you assert that your work complies with the guidelines, that all references used are properly cited, and that the report was written autonomously by yourself. We ask you to download the guidelines and submit the declaration together with your seminar report and talk to your supervisor. A German version of the guidelines and a German version of the declaration are also available and may be used as well.
Note: failure to meet deadlines, unexcused absence from compulsory sessions (presentations and the preliminary meeting, as announced by email to each participating student), or dropping out of the seminar more than 3 weeks after the preliminary meeting/topic distribution results in the grade 5.0/"not appeared."
Topics, relevant references and participants
The general topic for this semester's seminar will be "Deep Learning for Human Language Technology and Pattern Recognition." The following topics will be introduced at the preparatory meeting in the seminar room at the Lehrstuhl Informatik 6. The date of the meeting has been announced individually to the seminar's participants as decided in the central registration (see above).
Please note that the ordering and numbering of the topics has been changed since the kick-off meeting. The seminarist covering each topic is listed next to the topic title.
- Introduction to Deep Learning
-
Feedforward Deep Networks
(Bolte; Supervisor: Jan-Thorsten Peter)
Initial References:
Estimated date for presentation: 03.06
-
Regularization of Deep or Distributed Models
(Rawiel; Supervisor: Kazuki Irie)
Initial References:
Estimated date for presentation: 03.06
-
Approximate Second-Order Methods
(Erdenedash; Supervisor: Patrick Doetsch)
Initial References:
Estimated date for presentation: 03.06
-
Optimization for Training Deep Models
(Wilts; Supervisor: Pavel Golik)
Initial References:
Estimated date for presentation: 03.06
-
Convolutional Networks
(Krömker; Supervisor: Harald Hanselmann)
Initial References:
Estimated date for presentation: 06.06
-
Recurrent Neural Networks and Long-Term Dependencies
(Grewing; Supervisor: Parnia Bahar)
Initial References:
Estimated date for presentation: 06.06
-
Practical Methodology
(Abdelkawi; Supervisor: Markus Kitza)
Initial References:
Estimated date for presentation: 06.06
-
Monte Carlo Methods
(Jeschke; Supervisor: Pavel Golik)
Initial References:
Estimated date for presentation: 10.06
-
Autoencoders
(Mroß; Supervisor: Markus Kitza)
Initial References:
Estimated date for presentation: 10.06
-
Representation Learning
(Asselborn; Supervisor: Albert Zeyer)
Initial References:
Estimated date for presentation: 10.06
-
Structured Probabilistic Models for Deep Learning
(Brose; Supervisor: Tamer Alkhouli)
Initial References:
Estimated date for presentation: 10.06
-
Deep Generative Models
(Lukas; Supervisor: Tobias Menne)
Initial References:
Estimated date for presentation: 13.06
-
Neural Turing Machines and Related Models
(Kurin; Supervisor: Albert Zeyer)
Initial References:
- A. Graves, G. Wayne, I. Danihelka "Neural Turing Machines," arXiv:1410.5401, Oct. 2014
- I. Danihelka, G. Wayne, B. Uria, N. Kalchbrenner, A. Graves, "Associative Long Short-Term Memory," arXiv:1602.03032, Feb. 2016.
Estimated date for presentation: 13.06
- Deep Learning for Image Recognition
-
Similarity Learning using Convolutional Neural Networks
(Pavlitskaya; Supervisor: Harald Hanselmann)
Initial References:
Estimated date for presentation: 13.06
-
Spatial Transformer Networks
(Shen; Supervisor: Harald Hanselmann)
Initial References:
- M. Jaderberg, K. Simonyan, A. Zisserman and K. Kavukcuoglu: "Spatial transformer networks," Proc. Advances in Neural Information Processing Systems (NIPS), Montreal, Canada, pp. 2008-2016, Dec. 2015.
Estimated date for presentation: 17.06
- Deep Learning for Language Modeling
-
Neural Network-based Language Models
(Ayhan; Supervisor: Parnia Bahar)
Initial References:
- Y. Bengio, R. Ducharme, P. Vincent, C. Jauvin, "A Neural Probabilistic Language Model," Journal of Machine Learning Research, Vol. 3, pp. 1137-1155, Feb. 2003.
- T. Mikolov et al., "Recurrent Neural Network based Language Model," Proc. Interspeech, Makuhari, Japan, pp. 1045-1048, Sep. 2010.
Estimated date for presentation: 17.06
-
Convolutional Neural Networks for Texts (and for Neural Language Modeling)
(Hinrichs; Supervisor: Kazuki Irie)
Initial References:
- X. Zhang, J. Zhao, and Y. LeCun, "Character-level Convolutional Networks for Text Classification," Proc. Advances in Neural Information Processing Systems (NIPS), pp. 649-657, Montréal, Canada, Dec. 2015.
- Y. Kim, Y. Jernite, D. Sontag and A. M. Rush, "Character-aware Neural Language Models," Proc. AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, Feb. 2016.
Estimated date for presentation: 17.06
-
Character-based Embeddings of Words with Recurrent Nets (for Neural Language Modeling)
(Grätzer; Supervisor: Kazuki Irie)
Initial References:
- W. Ling, T. Luís, L. M. Ramón, F. Astudillo, S. Amir, C. Dyer, A. W. Black, I. Trancoso, "Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation," Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1520-1530, Lisbon, Portugal, Sep. 2015.
- M. Faruqui, Y. Tsvetkov, G. Neubig, C. Dyer, "Morphological Inflection Generation Using Character Sequence to Sequence Learning," arXiv:1512.06110, under review at Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), San Diego, CA, USA, Jun. 2016.
Estimated date for presentation: 20.06
- Deep Learning for Automatic Speech and Handwriting Recognition
-
End-to-end Recurrent Neural Network Systems
(Mohanty; Supervisor: Patrick Doetsch)
Initial References:
- William Chan, Navdeep Jaitly, Quoc V. Le and Oriol Vinyals. "Listen, Attend and Spell," arXiv:1508.01211, Aug. 2015.
- Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel and Yoshua Bengio. "End-to-End Attention-based Large Vocabulary Speech Recognition," arXiv:1508.04395, Aug. 2015.
Estimated date for presentation: 20.06
-
Decoding in Recurrent Neural Networks
(Moothiringote; Supervisor: Patrick Doetsch)
Initial References:
- Alex Graves, Santiago Fernández and Faustino Gomez, "Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks," Proc. International Conference on Machine Learning (ICML), pp. 369-376, Pittsburgh, PA, USA, Jun. 2006.
Estimated date for presentation: 20.06
-
DNN Adaptation
(Machado Duarte; Supervisor: Pavel Golik)
Initial References:
- G. Saon, H. Soltau, D. Nahamoo, M. Picheny, "Speaker adaptation of neural network acoustic models using i-vectors," IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp. 55-59, Olomouc, Czech Republic, Dec. 2013.
Estimated date for presentation: 27.06
- Deep Learning for Speech Signal Processing
-
Bottleneck Features
(Macherey; Supervisor: Markus Kitza)
Initial References:
- Gehring, J., Miao, Y., Metze, F., Waibel, A.: "Extracting deep bottleneck features using stacked auto-encoders," Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Vancouver, BC/Canada, pp. 3377-3381, May 2013.
- Himawan, I., Motlicek, P., Imseng, D., Potard, B.: "Learning feature mapping using deep neural network bottleneck features for distant large vocabulary speech recognition," Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, South Brisbane, QLD/Australia, pp. 4540-4544, April 2015.
- Diyuan Liu, Si Wei, Wu Guo, Yebo Bao, Shifu Xiong, Lirong Dai: "Lattice based optimization of bottleneck feature extractor with linear transformation," Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Florence, Italy, pp. 5617-5621, May 2014.
Estimated date for presentation: 27.06
-
Neural Network Based Speech Enhancement
(Krüger; Supervisor: Tobias Menne)
Initial References:
- Y. Xu, J. Du, Z. Huang, L.-R. Dai, C.-H. Lee: Multi-objective Learning and Mask-based Post-processing for Deep Neural Network based Speech Enhancement. Proc. Interspeech 2015, Dresden, Germany, pp. 1508-1512, Sep. 2015.
- F. Weninger, H. Erdogan, S. Watanabe, E. Vincent, J. Le Roux, J. R. Hershey, B. Schuller: "Speech enhancement with LSTM recurrent neural networks and its application to noise-robust ASR," Proc. 12th Int. Conf. on Latent Variable Analysis and Signal Separation (LVA/ICA), Liberec, Czech Republic, pp. 91-99, Aug. 2015.
Estimated date for presentation: 27.06
-
Using Deep Learning to Support Conventional Signal Processing
(Ryndin; Supervisor: Tobias Menne)
Initial References:
- J. Heymann, L. Drude, A. Chinaev, R. Haeb-Umbach: BLSTM Supported GEV Beamformer Front-End for the 3rd Chime Challenge. Proc. Automatic Speech Recognition and Understanding Workshop (ASRU), Scottsdale, AZ, pp. 444-451, Dec. 2015.
Estimated date for presentation: 04.07
- Deep Learning for Machine Translation
-
Alignment Structures for Attentional Neural Translation Models
(Petrushkov; Supervisor: Tamer Alkhouli)
Initial References:
Estimated date for presentation: 04.07
Guidelines for the article and presentation
The roughly 20-page article and the presentation slides (between 20 and 30 slides) should be prepared in LaTeX. Presentations will consist of 30 to 45 minutes of presentation time and 15 minutes of discussion time. Document templates for both the article and the presentation slides are provided below, along with links to LaTeX documentation available online. Both the article and the slides must be submitted electronically in PDF format. Other formats will not be accepted.
- Online LaTeX-Documentation:
- Guidelines for articles and presentation slides:
General:
- The aim of the seminar for the participants is to learn the
following:
- to tackle a topic and to expand knowledge
- to critically analyze the literature
- to hold a presentation
- Take notice of references
to other topics in the seminar and discuss topics with one
another!
- Take care to stay within your
own topic. To this end participants should be aware of the other
topics in the seminar. If applicable, cross-reference
other articles and presentations.
Specific:
- Important: As part of the introduction, a slide should
outline the most important literature used for the presentation. In
addition, the presentation should clearly indicate which literature the particular
elements of the presentation refer to.
- Participants are expected to seek out additional literature on their
topic. Assistance with the literature search is available at the
faculty's library. Access to literature is naturally also available at
the Lehrstuhl Informatik 6 library.
- Notation/Mathematical
Formulas: consistent, correct notation
is essential. When necessary, differing notation from various
literature sources is to be modified or standardized in order to be
clear and consistent. The
lectures held by the Lehrstuhl Informatik 6 should provide a
guide as to what appropriate notation should look like.
- Tables
must have titles (appearing above the table).
- Figures
must have captions (appearing below the figure).
- The use of English is recommended and mandatory for the presentation slides; the article and the oral presentation may, however, be given in German.
- In the case that no adequate translation of an
English technical term is available, the term should be used unchanged.
- Completeness:
acknowledge all literature and
sources.
- Referencing must conform to the standard
described in the article template.
- Examples should be used to illustrate points.
- Examples should be as complex as necessary but as simple
as possible.
- Slides should be used
as presentation aids and not to replace the role of the presenter;
specifically, slides should:
- illustrate important points and relationships;
- remind the audience (and the presenter) of important aspects
and considerations;
- give the audience an overview
of the presentation.
- Slides should not contain chunks of text or complicated
sentences; rather they should consist of succinct words and terms.
- Use illustrations where appropriate: a picture is worth a thousand words!
- Abbreviations should be defined at the first usage in the manner
demonstrated in the following example: "[...] at the
Rheinisch-Westfälischen Technischen Hochschule (RWTH) there are
[...]."
- Usage of fonts, typefaces and colors in presentation slides must
be consistent and appropriate. Such means should serve to clarify
points or relationships, not be applied needlessly or at random.
- Care should be taken when selecting fonts for presentation
slides (also within diagrams) to ensure legibility on a projector even
for those seated far from the screen.
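The two caption rules above (table titles above the table, figure captions below the figure) can be sketched in LaTeX as follows; the numbers, labels, and graphic file name are illustrative only, not part of the seminar templates:

```latex
% Table: \caption comes BEFORE the tabular, so the title appears above.
\begin{table}[t]
  \caption{Word error rates [\%] on an illustrative test set.}
  \label{tab:wer}
  \centering
  \begin{tabular}{lr}
    \hline
    Model        & WER  \\
    \hline
    Baseline GMM & 15.2 \\
    Hybrid DNN   & 11.7 \\
    \hline
  \end{tabular}
\end{table}

% Figure: \caption comes AFTER the included graphic, so it appears below.
\begin{figure}[t]
  \centering
  \includegraphics[width=0.7\linewidth]{learning-curve}
  \caption{Training and validation error over epochs (illustrative).}
  \label{fig:curve}
\end{figure}
```

Referring to floats via `\ref{tab:wer}` and `\ref{fig:curve}` keeps numbering consistent when the templates reorder content.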
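For the referencing rule above, a generic BibTeX sketch may help; the exact citation style is defined by the article template, and the entry key below is an arbitrary choice (the reference itself is one of the initial references listed for the decoding topic):

```latex
% Entry in a references.bib file:
@inproceedings{graves2006ctc,
  author    = {Alex Graves and Santiago Fern{\'a}ndez and Faustino Gomez
               and J{\"u}rgen Schmidhuber},
  title     = {Connectionist Temporal Classification: Labelling Unsegmented
               Sequence Data with Recurrent Neural Networks},
  booktitle = {Proc. International Conference on Machine Learning (ICML)},
  pages     = {369--376},
  address   = {Pittsburgh, PA, USA},
  year      = {2006},
}

% In the article source: cite with \cite{graves2006ctc} and include the
% bibliography with \bibliography{references}, using the template's style file.
```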
Contact
Inquiries should be directed to the respective supervisors or to:
Julian Schamper
RWTH Aachen University
Lehrstuhl Informatik 6
Ahornstr. 55
52074 Aachen
Room 6129
Tel: 0241 80 21615
E-Mail: schamper@cs.rwth-aachen.de