My research project
Towards better explainability of human sketch modelling
As deep learning models play an increasingly important role in our daily lives, their explainability has become a key factor in determining whether users can trust them. However, because sketches are sparse and composed of distinct strokes, most existing explainability methods cannot be applied directly to sketch models. We aim to study the explainability of sketch-related tasks in greater depth, including analysing the shape and position information of individual strokes and the relationships between different strokes. We hope this research will lay a foundation for the field of sketch explainability and offer a fresh perspective on other computer vision tasks.