Wegmans Food Markets has found itself at the center of a growing privacy debate following the discovery of new biometric surveillance measures at its New York City locations.
While the company maintains that the technology is a necessary security tool used only to identify individuals “flagged for misconduct,” shoppers at its Manhattan and Brooklyn stores are being met with legally mandated signage that tells a different story. According to the signs, which are required under NYC's 2021 biometric disclosure law, the grocer “collects, retains, converts, stores or shares” biometric data that could come not just from facial recognition but also from “eye scans” and “voiceprints.”
In a public statement, Wegmans public relations manager Tracy Van Auker defended the move, saying the cameras are deployed in only a small fraction of stores facing elevated risk. Van Auker clarified that the system uses facial recognition solely as an investigative lead to spot people previously identified by asset protection or law enforcement.
Despite the broad language on its signs, the company's stance is firm: “We do not collect other biometric data such as retinal scans or voiceprints.” Wegmans further stated that it does not share this data with third parties and retains images only as long as necessary for security purposes, though, citing security concerns, it declined to specify a retention period.
When contacted via email for further clarification on the discrepancy between the signage and the company's public denials, Mandee Puleo, senior public relations coordinator for Wegmans, reiterated the company's reliance on facial recognition alone.
“While the signage lists several types of biometric technology, we only use facial recognition,” Puleo wrote, adding that the broad language on the signs is intended to “comply with local requirements.” She noted the company remains “unable to provide specific details for security reasons” regarding the system's exact operations.

However, privacy advocates and technical experts argue that this “notice-only” approach is insufficient and potentially misleading.
Ben Winters, director of AI and privacy for the Consumer Federation of America and chair of the ACM USTPC Privacy Subcommittee, warns that over-inclusive or inconsistent notices diminish public trust. In a call with The Packer, Winters noted that since 2020 the ACM has called for an immediate suspension of such technology because it is not yet “reliably unbiased enough” to be used when people's “lives, livelihoods — and certainly liberty — are at stake.”
Winters says simply slapping a sign on a door does not constitute meaningful consent.
“You can't just slap [a sign] there and say, ‘Hey, we're doing biometric identification today,’ and then make someone decide whether they should go in and get that banana they need,” Winters says.
He argues true transparency requires rigorous testing and accountability measures integrated throughout the process, rather than just “at the point of use.”
“There needs to be notice well in advance,” Winters says. “There needs to be a description and ability for people to investigate what data was used to make the algorithm. How is it being implemented, what sort of data is being collected, and how is it being used?
“Transparency is key,” he says. “It's never sufficient, but it's absolutely necessary.”