At the Supreme Court: Facebook, Inc. v. Privacy Commissioner of Canada
What this case reveals about the limits of consent in privacy law
As president of the uOttawa Digital Policy Association, one of my goals is to bring students into the spaces where digital policy is actually being shaped. To that end, I recently organized a group to attend the hearing in Facebook, Inc. v. Privacy Commissioner of Canada.
Listening to arguments about Graph APIs and privacy architecture unfold in Canada’s highest court, I was struck by how old some of our legal assumptions now seem relative to the systems they are being asked to govern.
I have been thinking about the case ever since, because it raises a question that goes beyond a single platform: what happens to privacy law when the underlying model of individual consent no longer maps onto the way information is actually shared?
Privacy law often assumes a relatively simple bilateral relationship: an individual deciding whether to share information with an organization. But many digital systems are networked. One person’s decision can affect another person’s data, visibility, or exposure.
The core question before the Court is whether a consent-based model can still capture privacy harms that arise in environments where information is shared through relationships and platform architecture rather than direct interaction.
While the Court is formally tasked with interpreting the requirements for consent and safeguards under the Personal Information Protection and Electronic Documents Act (PIPEDA), the case reveals yet again that these legal categories are under immense strain.
The case is significant not only because of its facts, but because it reveals that our primary tool for privacy protection, individual consent, is deeply ill-suited to an ecosystem where data harms are networked, relational, and downstream.
Relational privacy in a networked world
The most compelling feature of the case is that it does not concern only the data of the user who installed a third-party app. Facebook estimated that only 272 users in Canada installed “This Is Your Digital Life,” yet the personal information of up to 621,889 other Facebook users in Canada was disclosed to the app because they were friends of those users. The app later became widely known for secretly harvesting the data used in the Cambridge Analytica scandal.
This exposes a structural limitation in the traditional consent model. Consent is usually imagined as an individual transaction, but on a social platform, privacy is relational. One user’s choice to click “allow” creates exposure for others who were never part of that interaction.
This relational problem makes the logic of standardized digital contracts especially unstable. In these mass services, users have little or no practical bargaining power. When a user is presented with a 9,100-word data policy on a take-it-or-leave-it basis, the legal and moral weight of that click is weakened considerably.
That click is then used to justify the disclosure of a friend’s personal information, someone who never saw the policy, never had the chance to negotiate, and may not have even known the app existed.
This problem is not unique to social media. It is likely to recur in any networked or ambient technology where one person’s interaction with a system affects the data of others who were never meaningfully part of the consent process.
Consent as a standard
This tension explains why the framing of meaningful consent matters so much: the legal standard determines whether privacy law evaluates consent by reconstructing individual understanding after the fact, or by asking whether the disclosure framework was acceptable in principle.
If we treat consent as a purely subjective inquiry, we are forced to ask what a specific user actually understood in a specific moment: an empirical question that is often impossible to answer at scale.
However, the standard under PIPEDA is an objective, “reasonable person” standard. This suggests that privacy law is not merely descriptive, but partly normative. It is not just a survey of what people happen to understand, but a statement of what privacy should look like in a free and democratic society.
By moving away from subjective understanding, the law creates room to assess whether a platform’s design and disclosures were legally sufficient in principle, regardless of whether a user actually waded through the fine print.
Rights over needs
This shift toward a more objective, systemic view is supported by the text of PIPEDA itself. The purpose of the Act does not describe privacy and commercial activity as two perfectly symmetrical interests. It speaks of the “right” of privacy of individuals and the “need” of organizations to collect, use, or disclose personal information for appropriate purposes.
This distinction suggests that the law is not simply meant to facilitate whatever data practices organizations find commercially useful, but to permit them only within a framework that begins from the recognition of privacy as a right.
One exchange at the hearing made this especially clear. When Facebook’s argument appeared to prioritize its business model over privacy protections, suggesting that organizational needs should outweigh rights, the justices pushed back. They took issue with the notion that the legislation’s purpose could be interpreted as prioritizing the operation of the platform over its users’ privacy rights.
The architecture of risk
The second major issue, safeguards, forces us to confront where a platform’s responsibility actually ends. Facebook’s position is that once information was disclosed to app developers with user consent, responsibility for downstream misuse lay with those third parties. The Commissioner’s position is that Facebook had a continuing obligation to maintain reasonable safeguards.
A platform does not merely transfer data. It creates, structures, and profits from the environment in which that data exists. If a platform designs a permissions structure that bad actors can exploit, or fails to act on red flags such as an app requesting clearly unnecessary data, it has effectively created an environment of risk. This is especially true where the platform’s own architecture determines what categories of data can be accessed, by whom, and under what default conditions.
This shows why the bilateral model is inadequate. A platform’s role is not exhausted once a user clicks “allow.” The architect of the environment remains responsible for the integrity of the data ecosystem it enables.
Toward a new theory of platform responsibility
Because new privacy legislation is expected to be tabled in Parliament this spring, it is also worth asking what lessons from this case should shape reform efforts. New legislation will likely draw significantly from the previous Parliament’s proposed Consumer Privacy Protection Act (CPPA), which reflected a move toward a more modern language of accountability.
The CPPA placed more emphasis on organizational accountability and privacy management programs, rather than relying so heavily on individual consent as the primary mechanism of privacy protection.
Yet even a modernized framework may not resolve the hardest problem this case raises. Stronger accountability rules may clarify what organizations must do to govern the data environments they create, but they still do not answer the deeper question: how should the law treat information that is inherently relational?
Watching the hearing, I realized that Canadian privacy law is in the middle of a conceptual shift. The older model, centered on notice, consent, and bilateral relationships, is buckling under the weight of networked technology. A newer model, centered on governance, privacy-by-design, and organizational accountability, is emerging to take its place.
This case is a landmark not just because of the size of the company and the scandal involved, but because it asks whether our legal framework can adapt to a world where data harms are networked and relational.
The answer will help determine whether privacy law can remain meaningful as digital systems become, increasingly, networked by design.