Deep learning approaches have driven tremendous advances in many areas of computer science. Deep learning is a branch of machine learning in which learning is performed using deep and complex architectures such as recurrent and convolutional artificial neural networks. Many computer science applications have benefited from deep learning, including computer vision, speech recognition, natural language processing, sentiment analysis, social network analysis, and robotics. The success of deep learning has also enabled learning paradigms such as reinforcement learning, in which learning proceeds purely by trial and error, driven solely by rewards or punishments for actions. Deep reinforcement learning aims to create systems that can learn how to adapt in the real world. Because deep learning relies on deep and complex architectures, the learning process is usually time- and effort-consuming and requires huge labeled data sets. This inspired the introduction of transfer and multi-task learning approaches to better exploit the available data during training and to adapt previously learned knowledge to emerging domains, tasks, or applications.
Although much research is ongoing in these areas, many challenges remain unsolved. This workshop will bring together researchers working on deep learning, on the intersection of deep learning and reinforcement learning, and/or on using transfer learning to simplify deep learning, and it will help researchers with expertise in one of these fields learn about the others. The workshop also aims to bridge the gap between theory and practice by giving researchers and practitioners the opportunity to share ideas and to discuss and critique current theories and results. We invite the submission of original papers on all topics related to deep learning, deep reinforcement learning, and transfer and multi-task learning, with special interest in, but not limited to:
- Deep learning for innovative applications such as machine translation and computational biology
- Deep learning for natural language processing
- Deep learning for recommender systems
- Deep learning for computer vision
- Deep learning for systems and networks resource management
- Optimization for deep learning
Deep Reinforcement Learning:
- Deep transfer learning for robots
- Determining rewards for machines
- Machine translation
- Energy consumption issues in deep reinforcement learning
- Deep reinforcement learning for game playing
- Stabilizing learning dynamics in deep reinforcement learning
- Scaling up prior reinforcement learning solutions
Deep Transfer and Multi-task Learning:
- New perspectives or theories on transfer and multi-task learning
- Dataset bias and concept drift
- Transfer learning and domain adaptation
- Multi-task learning
- Feature-based approaches
- Instance-based approaches
- Deep architectures for transfer and multi-task learning
- Transfer across different architectures, e.g. CNN to RNN
- Transfer across different modalities, e.g. image to text
- Transfer across different tasks, e.g. object recognition and detection
- Transfer from weakly labeled or noisy data, e.g. Web data
- Datasets, benchmarks, and open-source packages
- Resource-efficient deep learning
Organizing Committee
- Jaime Lloret Mauri, Universidad Politécnica de Valencia, Spain
- Zilong Ye, California State University, Los Angeles, USA
- Thar Baker, Liverpool John Moores University, UK
- Moayad Aloqaily, Carleton University, Canada
- Mohammad Alsmirat, University of Sharjah, UAE
KEYNOTE SPEAKERS
Will be announced soon.
Author Submission Guidelines:
Submission Site:
https://easychair.org/conferences/?conf=idsta2023
Paper format
Submitted papers (.pdf format) must use the A4 IEEE Manuscript Templates for Conference Proceedings. Please remember to add Keywords to your submission.
Length
Submitted papers may be 6 to 8 pages. Up to two additional pages may be added for references. The reference pages must only contain references. Overlength papers will be rejected without review.
Originality
Papers submitted to DTL must be the original work of the authors. They may not be simultaneously under review elsewhere. Publications that have been peer-reviewed and have appeared at other conferences or workshops may not be submitted to DTL. Authors should be aware that IEEE has a strict policy on plagiarism (https://www.ieee.org/publications/rights/plagiarism/plagiarism-faq.html). The authors' prior work must be cited appropriately.
Author list
Please ensure that you submit your paper with the full and final list of authors in the correct order. The author list registered for each submission may not be changed in any way after the paper submission deadline.
Proofreading:
Please proofread your submission carefully. It is essential that the language used in the paper is clear and correct so that it is easily understandable. (Either US English or UK English spelling conventions are acceptable.)
Publication:
All accepted papers in IDSTA2023 and its co-located workshops will be submitted to IEEE Xplore for possible publication.
Program
The program will be announced with the IDSTA2023 program at https://idsta-conference.org/program.php
Venue
For venue and accommodation information, please visit https://idsta-conference.org/venue.php
Registration
For registration information, please visit https://idsta-conference.org/registration.php
Camera Ready
For camera-ready submission information, please visit https://idsta-conference.org/cameraready.php