Will Tao on CBSA’s AI Border Screening Tool: Risks for Travellers


Photo credit to Toronto Star

Heron Law Offices’ Principal Lawyer, Will Tao, was recently quoted in a Toronto Star article by Senior Immigration Reporter Nicholas Keung, discussing the Canada Border Services Agency’s planned “Traveller Compliance Indicator” (TCI). Please note: this news story is accessible only to Toronto Star subscribers.

Our office’s Access to Information and Privacy (ATIP) request led to the disclosure of this planned TCI system, an automated decision-making (ADM) tool currently undergoing review by the Treasury Board Secretariat. While the system was originally slated for rollout by the end of 2025, full deployment now appears to be expected by 2027. Because this development was uncovered early, we hope there is still time for meaningful scrutiny, feedback, and consultation before implementation.

The TCI is designed to assign compliance scores to everyone entering Canada and to flag “higher risk” travellers for further inspection. While the CBSA has framed the tool as a way to enhance efficiency at ports of entry, Will has raised several important concerns.

As Will explained:

“If you’ve historically been very critical over a certain group, then that will be in the data and we’ll transfer that into the tool. You look for the problems and you find problems where you’re looking, right?”

Will also noted that unlike Canadian citizens—who may only face inconvenience if flagged—non-citizens could face far more serious consequences:

“If you are a Canadian citizen, and let’s say you’ve had like a money laundering situation where you brought in too much currency, you’ll be inconvenienced by having your bags flipped and asked a lot of questions. But legally speaking you’ll be allowed to come back into Canada. But for immigrants who don’t have a right to enter Canada, the secondary examination could be the trigger point of inadmissibility.”

He expressed concern that the tool, like other AI systems, could reinforce biases linked to skin colour, citizenship, or background—further marginalizing certain groups.

Finally, Will highlighted potential legal challenges if the system provides only generic or “boilerplate” reasons for flagging travellers:

“…if those reasons are actually an officer rationalizing ex post to obscure what the machine’s real concerns are.”

At Heron Law Offices, we remain committed to monitoring the intersection of AI, administrative decision-making, and procedural fairness. With Canada moving toward greater reliance on predictive analytics at ports of entry, it is critical to ensure transparency, accountability, and meaningful recourse for those who may be unfairly targeted.

