The Conformance Checking Challenge: CCC 2020

ICPM challenge and study

Presented by Luciano García-Bañuelos

The Conformance Checking Challenge (CCC, co-located with ICPM) is a contest for researchers and practitioners who want to test themselves on process conformance, highlighting commonalities and discrepancies between the processes described in the provided models and event logs. We spoke about its conception and organization with a renowned conformance checking expert and founder of the CCC.

Tell us a bit about yourself and your research institute!

My name is Luciano García-Bañuelos and I work at Tecnologico de Monterrey in Mexico. Just to clarify, I am not based in Monterrey but at a campus in Puebla, a lovely city in the center of Mexico.

I am currently part of the research group on machine learning and computational methods. That also means there is currently neither a business process management nor a process mining group at Tecnologico de Monterrey. I myself spent almost 10 years working with Prof. Marlon Dumas in Estonia before moving back to my home country. Therefore, my current challenge is to motivate students and colleagues to join our vibrant research community, with the aim of organizing not only the Conformance Checking Challenge but the full ICPM conference in Mexico.

When and how did you first come up with the idea to set up the challenge?

To be honest, organizing the challenge was probably not something I had thought about before. However, it was a really nice surprise to receive an email from Prof. Wil van der Aalst in September 2018, inviting me to be part of the team that organized the first Conformance Checking Challenge. The email arrived while I was in Sydney, after the BPM conference, wondering how I could get back to Tartu: my flight had been cancelled because of Typhoon Mangkhut. The other members of the team were Abel Armas (now at the University of Melbourne) and Jorge Munoz-Gama (now at the Pontificia Universidad Católica de Chile). I immediately answered Wil with a yes. The problems came only afterwards. Indeed, can you imagine setting up a call to organize the challenge with two other guys living literally on the other side of the planet? Things became easier once I moved back to Mexico in December 2018.
And, well, this year is not necessarily simpler. The team this year includes Abel Armas and Artem Polyvyanyy, living in Melbourne (Australia), Gert Janssenswillen, living in Hasselt (Belgium), and myself, living in Puebla (Mexico). Luckily, Gert has found the strength to meet with us around midnight.

What are the main reasons that motivate you in this endeavour?

Well, I have long been interested in using behavioral representations as instruments to reason about model structuredness and model comparison, among other aspects. At some point, together with Marcello La Rosa's team, we explored the use of such behavioral representations to develop what we called behavioral alignments, and then to produce textual as well as visual conformance checking feedback/diagnostics.

Therefore, I felt the invitation was not only interesting but also an opportunity to enrich the goals of the challenge. I mean, from the first edition of the event, we felt that the challenge should cover not only the algorithmic aspects of computing alignments but also the need to produce diagnostics that are meaningful (interpretable) to domain experts with little knowledge of process modeling/mining.

What are the best and worst moments in gathering and releasing data?

It is always a nightmare to find a good dataset for the challenge. We are not as lucky as the team organizing the Process Discovery Contest: they can generate artificial logs. In contrast, for conformance checking we need not only an event log but also the underlying process models. And even if we decided to generate artificial logs, we would still want an expert from a real-world domain to judge the usefulness of the diagnostics generated by the conformance checking tools. In the first edition of the challenge, we went for an event log and a model taken from a medical training process. But we did it! We managed to put together a nice set of event logs and models.

Tell us about CCC 2020!

This year, we decided to take event logs from a real-world dataset used in a former BPI Challenge. We then used state-of-the-art discovery algorithms to derive the process models. Finally, we took small samples from each of the input logs and shuffled them together to produce new, noisy logs. We would like to see whether the contenders are able to unshuffle them. That was it!
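The construction described here (sampling traces from several source logs and mixing them into one noisy log whose origins contenders must recover) can be sketched roughly as follows. This is a minimal illustration with made-up helper names and toy traces, not the actual CCC pipeline:

```python
import random

def shuffle_logs(logs, sample_size, seed=0):
    """Sample traces from several event logs and mix them into one noisy log.

    `logs` maps a source label to a list of traces (each trace is a list of
    activity names). Returns the mixed log together with the hidden
    ground-truth labels that an "unshuffling" contender would try to recover.
    """
    rng = random.Random(seed)
    mixed, truth = [], []
    for label, traces in logs.items():
        # Take a small sample from each source log, as in the CCC setup.
        for trace in rng.sample(traces, min(sample_size, len(traces))):
            mixed.append(trace)
            truth.append(label)
    # Shuffle both lists with the same permutation so labels stay aligned.
    order = list(range(len(mixed)))
    rng.shuffle(order)
    return [mixed[i] for i in order], [truth[i] for i in order]

# Toy example with two hypothetical source logs.
logs = {
    "log_A": [["a", "b", "c"], ["a", "c"], ["a", "b", "b", "c"]],
    "log_B": [["x", "y"], ["x", "z", "y"], ["x", "y", "y"]],
}
mixed, truth = shuffle_logs(logs, sample_size=2)
```

In the real challenge the ground-truth labels are of course withheld, and contenders would use the discovered process models to assign each trace back to its most likely source.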

But the problem was not only to organize a dataset. As I said before, the main problem is to find a decent time to discuss (thanks Gert for volunteering and staying awake late)!

How do you see the future of CCC?

Well, I see new approaches to conformance checking being proposed by the community. I am excited about the work on stochastic conformance checking, on approximate alignments, and on integrating explainable machine learning into the equation. I still believe we have a long road ahead. I still dream of techniques that produce understandable feedback for people in other domains, and of seeing such techniques integrated into commercial tools.