Overview:
Learning and reasoning are two fundamental abilities of intelligence. While machine learning has achieved record-breaking successes in the last decade through the rapid development of deep learning techniques, neural network models are nevertheless perceived as black boxes. There is a compelling need to interpret and visualize the learned representations and the decisions made by NN models, especially for sensitive applications such as medical diagnosis or autonomous driving, in which rare mistakes can be costly or fatal. Moreover, the ability to assemble trainable networks, and thus to combine previously acquired knowledge, plays an extremely important role in constructing reasoning systems.

Topics:

  • Visualization and distillation of deep learning models
  • Analysis and comparison of methods to interpret and visualize deep learning models
  • Machine reasoning systems based on first-order logic, fuzzy logic, probabilistic reasoning, causal reasoning, spatial reasoning, and social reasoning
  • Industrial applications, e.g. in medical diagnosis and autonomous driving
  • Safe AI and AI ethics

Paper Submission:
Submitted papers must be formatted according to IJCAI guidelines and submitted electronically through the IReDLiA paper submission site.

Important Dates:

  • Submission Deadline: 01 May 2018
  • Notifications: 22 May 2018
  • Camera Ready: 29 May 2018
  • Workshop Dates: 14–15 July 2018

Venue and Registration:
The workshop will take place at Stockholmsmässan, Stockholm, Sweden. Please consult the main ICML website or IJCAI-ECAI website for details on registration.

Organizers and Contact Information:
Lixin Fan
Nokia Technologies
lixin.fan at nokia.com
Chee Seng Chan
University of Malaya, Malaysia
cs.chan at um.edu.my
Feiyue Wang
Chinese Academy of Sciences, PR China
feiyue.wang at ia.ac.cn
Xuefeng Liang
Xidian University, PR China
xliang at xidian.edu.cn