Thank you very much, Madam Chair.
Thank you to all of the witnesses for their presentations.
I'd like to follow up with Mr. Camara.
On the issue around Chinook, your recommendation is for the government to halt the use of it. What we've heard with Chinook is that there are potentially inherent biases embedded in this artificial intelligence system. Some of those biases are triggered by risk words that are identified and red-flagged [Technical difficulty—Editor] the large majority of the applications are rejected.
With that in mind, would you agree that with any artificial intelligence system the government must hold extensive consultations with stakeholders, and that there must be an independent assessment of these tools?