Food Safety in Numbers

On March 23, 2013, the civic organization Smart Chicago launched an ambitious program to enhance the city’s food safety efforts: Foodborne Chicago. Using a mix of statistical techniques and computer science, Foodborne searches Twitter for complaints of food poisoning, then follows up with users and generates formal investigations. Chicago is not alone in these efforts; San Francisco, Boston, and New York City are all in the process of implementing similar initiatives to better enforce their health codes.
Foodborne Chicago and its sibling programs are bold attempts to modernize governance, harnessing the massive streams of information on social media sites. However, while these initiatives have the potential to dramatically improve public health, they also grant additional power to the companies holding the data. This, in turn, will challenge traditional notions of privacy and property.
City health departments have historically played two roles in maintaining food safety: they coordinate with the Centers for Disease Control to manage foodborne outbreaks as they arise, and they work to prevent future outbreaks through inspections of food retail locations, including restaurants. Traditionally, these departments have relied on reporting from clinics and consumers to gather information. The Chicago Department of Public Health, for example, performs annual inspections of all restaurants, but it can also perform additional announced or unannounced investigations based on complaints it receives. Foodborne’s innovation is in its use of data to cast a wider net than traditional efforts have done, thereby reaching people who may not know that they can report food poisoning cases to the city.
Under the Hood
At the core of Foodborne is a technique called machine learning, the use of computers to comb through large datasets and discover deep patterns that human analysts would likely miss. Broadly, the computer’s task is to develop a model that can correctly place observations into categories of interest—say, identifying tweets as complaints or ordinary chatter. Researchers start by feeding the machine a training dataset containing pre-classified data. Using this information as a springboard, the machine tests a series of models, eventually converging on an equation it can use to classify future observations. The training data serve as a cheat sheet, allowing the machine to check its guesses throughout the model-building process.
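To make the idea concrete, the following sketch shows what a bare-bones version of such a classifier could look like in Python with the scikit-learn library. The handful of labeled tweets and the choice of model are assumptions for illustration; Foodborne Chicago has not published its actual features or algorithm.

```python
# A minimal sketch of supervised text classification with scikit-learn.
# The labeled tweets and the model choice are illustrative assumptions;
# Foodborne Chicago has not published its actual features or algorithm.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The training "cheat sheet": 1 = food poisoning complaint, 0 = ordinary chatter.
training_tweets = [
    "got food poisoning from that taco place, been sick all night",
    "pretty sure last night's sushi gave me food poisoning",
    "this burger is so good I could eat it every day",
    "watching a documentary about food poisoning outbreaks",
]
labels = [1, 1, 0, 0]

# Convert raw text into word-frequency features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(training_tweets, labels)

# Score new, unlabeled tweets; high-probability ones get surfaced for review.
new_tweets = ["I think the shrimp at joe's made me sick"]
print(model.predict_proba(new_tweets)[:, 1])
```

A real training set would be far larger, and the resulting probability scores would feed into the human review described below.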
Despite their sophistication, these machines do not run on their own. As Brian Richardson, director of public affairs for the Chicago Department of Public Health, explained to the HPR, Foodborne Chicago depends on human judgment in addition to computerized predictions. First, the algorithm “surfaces tweets that are related to foodborne illnesses.” Next, “a human classifier goes through those complaints that the machine classifies, […determining] what is really about food poisoning and what may be other noise.” The Foodborne team then tweets back at the likely cases, providing a link for users to file an official complaint. In short, computers deal with the massive quantity of Twitter data, and humans ensure the quality of the result. According to its website, between its launch on March 23, 2013 and November 10, 2014, the Foodborne algorithm flagged 3,594 tweets as potential food poisoning cases. Of these tweets, human coders have identified 419, roughly 12 percent, as likely cases meriting a reply on Twitter.
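Schematically, the division of labor Richardson describes might be organized like the sketch below. The tweet fields, the confidence threshold, the helper names, and the complaint link are hypothetical stand-ins rather than Foodborne Chicago’s actual code.

```python
# Illustrative human-in-the-loop triage, following the workflow described above.
# Tweets are assumed to be dicts with "text" and "user" fields; the threshold,
# helper names, and complaint link are hypothetical, not Foodborne's actual code.

COMPLAINT_LINK = "https://example.org/file-a-complaint"  # placeholder URL

def machine_pass(tweets, model, threshold=0.5):
    """Surface tweets the classifier scores as likely food poisoning complaints."""
    return [t for t in tweets if model.predict_proba([t["text"]])[0, 1] >= threshold]

def human_pass(candidates):
    """An analyst confirms which flagged tweets are genuinely about food poisoning."""
    confirmed = []
    for tweet in candidates:
        if input(f"Likely case? {tweet['text']!r} [y/n] ").strip().lower() == "y":
            confirmed.append(tweet)
    return confirmed

def reply_pass(confirmed):
    """Tweet back at likely cases with a link to file an official complaint."""
    for tweet in confirmed:
        print(f"@{tweet['user']} Sorry to hear it. You can file a report here: {COMPLAINT_LINK}")
```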
At first glance, an algorithm whose flagged tweets pan out only 12 percent of the time may seem highly inefficient. However, Foodborne has proven a valuable tool for the Chicago Department of Public Health. In its first nine months of operation, Foodborne initiated 133 health inspections. Approximately 40 percent of these investigations uncovered critical or severe violations of the health code—the kinds of violations that force restaurants to shut down or to remain open only under strict conditions. As Richardson noted, “that percentage is equivalent to the … percentage of violations we find based on reports we get from 311”—the phone number citizens can call to report food poisoning to their city’s municipal services. Though its program is not as expansive as Chicago’s, the City of New York has found Yelp data similarly useful, identifying three previously undiscovered outbreaks by sifting through restaurant reviews.
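For scale, a quick back-of-the-envelope pass over the figures reported above:

```python
# Back-of-the-envelope arithmetic on the figures cited above; all inputs come
# from the article, and the outputs are rounded estimates, not official counts.
flagged = 3594        # tweets the algorithm surfaced as potential cases
confirmed = 419       # tweets human coders judged likely cases
inspections = 133     # inspections initiated in the first nine months
critical_rate = 0.40  # approximate share uncovering critical or severe violations

print(f"Share of flagged tweets confirmed by humans: {confirmed / flagged:.1%}")               # about 12%
print(f"Estimated inspections finding serious violations: {inspections * critical_rate:.0f}")  # about 53
```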
At Harvard Business School, Dr. Michael Luca has proposed an even more ambitious project: to use Yelp review data to target future restaurant investigations. In an interview with the HPR, Luca explained that in addition to following up on complaints, city health departments also perform periodic investigations of restaurants. However, due to limited personnel and resources, health departments are often forced to select restaurants at random, hoping that the risk of investigation will be enough to cause all restaurants to comply. By combining Yelp review data with previous investigation results, Luca’s team has been able to develop an algorithm that correctly identified 80 percent of restaurants with egregious health code violations in the previous year. Armed with this model, city health departments could target their investigations more finely, tailoring inspections to match Yelp complaints about restaurants.
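A rough sketch of the kind of model Luca describes might combine review text with a restaurant’s inspection history, as below. The toy data, column names, and model choice are assumptions made for illustration; the team’s published methodology is more involved.

```python
# A sketch of the approach described above: Yelp review text combined with
# past inspection history to rank restaurants for targeted inspection. The toy
# data, column names, and model choice are assumptions, not Luca's actual code.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training table: one row per restaurant.
data = pd.DataFrame({
    "reviews": [
        "slow service but the dining room was spotless",
        "saw a roach scurry past the kitchen door, gross",
        "great tacos, friendly staff, very clean",
        "the fish smelled off and my whole family got sick",
    ],
    "prior_violations": [0, 3, 1, 2],
    "had_severe_violation": [0, 1, 0, 1],  # label from last year's inspections
})

# Combine text features from reviews with the numeric inspection history.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "reviews"),
    ("history", "passthrough", ["prior_violations"]),
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(data, data["had_severe_violation"])

# Rank restaurants by predicted risk; the highest-risk ones get inspected first.
data["risk"] = model.predict_proba(data)[:, 1]
print(data.sort_values("risk", ascending=False)[["reviews", "risk"]])
```

The point of ranking restaurants by predicted risk, rather than drawing them at random, is to let a fixed inspection budget uncover more of the serious violations.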
Limits of the Machine
The main limitation of data mining approaches is that they rely on the consumer. Tweets and Yelp reviews are based entirely on the experiences of average people, who are good at noticing traits like food quality and restaurant cleanliness but will almost never notice technical mistakes, like improper food labeling, or see breaches of the health code behind kitchen doors. No matter how promising machine learning techniques are for identifying front-of-shop violations, they will tend to miss these more hidden violations.
In a statement to the HPR, the Illinois Restaurant Association, an advocacy group for restaurateurs, declared itself supportive of Chicago’s efforts to improve food safety but cautioned the Department of Public Health “to be as vigilant as possible when it comes to assessing the validity of claims submitted via this public forum.” The Restaurant Association’s reaction strikes at an unease surrounding crowdsourced solutions and the herd mentality of the Internet. It isn’t hard to imagine unscrupulous customers or managers tweeting out false complaints in the hope of targeting investigations to tarnish a restaurant’s reputation.
Foodborne’s use of human analysts, together with its integration into the broader investigative process, is one check against abuse of the system. A false tweet would have to undergo the same scrutiny as a complaint received by phone. In that respect, Foodborne’s system is no more vulnerable than traditional means of reporting: whether complaints arrive by phone call or by tweet, the same human team evaluates their legitimacy.
More broadly, the vast amount of data that machine learning algorithms process is another bulwark against abuse, particularly in the case of Yelp reviews. A single fake negative review carries very little weight when placed among dozens, if not hundreds, of legitimate reviews. Furthermore, machine learning algorithms can zero in on completely unexpected trends; Luca explained that “it’s easy to guess the intuition of an inspector who is picking five words that are triggering [an investigation]. It’s not clear to me that it’s easier to game an algorithm. There are so many words that go into this that it would be a pretty complicated game.” For example, his research has shown that reviews mentioning basic ingredients tend to be more negative than reviews mentioning preparations like grilling and toasting. As long as the inner workings of a food safety algorithm stay under wraps, the complexity of its methods will be a defense against misuse.
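One way to see why such a model is hard to game is to look at the weights a simple text classifier assigns to individual terms; the sketch below uses invented reviews, not Yelp data, purely to make the point.

```python
# Why a model built on many word-level signals is hard to game: every term
# carries a learned weight, and no single word dominates. The reviews below
# are invented for illustration; they are not Yelp data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "the chicken was raw and I got sick afterwards",
    "perfectly grilled salmon and toasted buns, spotless place",
    "stale bread and warm mayo, never again",
    "lovely roasted vegetables and friendly staff",
]
is_negative = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reviews)
clf = LogisticRegression().fit(X, is_negative)

# In a real system thousands of such weights act together, so tweaking a
# handful of trigger words moves the final prediction very little.
for term, weight in sorted(zip(vectorizer.get_feature_names_out(), clf.coef_[0]),
                           key=lambda pair: pair[1], reverse=True):
    print(f"{term:12s} {weight:+.3f}")
```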
A Shift in Power
The Illinois Restaurant Association’s concern reveals a deeper problem than simple misuse of the system, one centered on the nature of human error and machine error. By shifting humans out of the picture and trusting machines to do our analysis, we cede power to computers and equations that cannot fully understand the world. Any model is an approximation of reality at best, and the predictions machines make will inevitably be a mixture of success and failure, depending on how well reality and the model match.
However, machine learning algorithms aren’t competing against a perfect system; human analysis in the status quo comes with its own set of biases and misconceptions that can lead it astray. Just as replacing humans with machines increases the risk of mechanical error, continuing to rely on human judgment will leave us liable to human error. Society will have to decide what mix of human and machine error it prefers.
Yet machine learning does more than empower machines. An expansion of such programs would also vest more power in Yelp and Twitter, the holders of these datasets. Dr. Elaine Nsoesie, a member of the team developing Boston’s program, explained to the HPR that the project is “very dependent on [Yelp and Twitter]. If they [were] not willing to provide the data, we wouldn’t have the data to use.” To its credit, Yelp has been very cooperative with New York City and San Francisco; in addition to providing New York with a daily data feed, Luca noted that the review website sat down with his team to match its database of reviews with San Francisco’s database of restaurants. Similarly, Twitter has a general policy of providing interested groups with open access to tweets and encouraging innovative use of its data, including a data grant awarded to the Boston team.
Cooperation aside, the fact remains that these companies now own and curate datasets that are increasingly valuable to the public and are becoming integrated into the government’s function. At the same time, the government has no right to these datasets under the current legal framework, nor are companies required to provide information to the extent that Yelp and Twitter have. As machine learning becomes a standard aspect of public life, data may be reconceptualized: no longer the exclusive property of the companies that collect it, but a resource to which society is guaranteed continued access for the social good. Especially in the case of foodborne disease, the state could make a strong claim that it needs access to these datasets in order to carry out its duty to protect the lives and health of its citizens. In all likelihood, these arguments will never need to be made in courtrooms, and cities and companies will continue to collaborate on projects like Foodborne. Still, we are moving toward a status quo in which we expect companies like Yelp and Twitter to cooperate with the government, even in the absence of a legal requirement to do so.
Rethinking Privacy
At a time when the NSA’s use of metadata has received heavy criticism, Foodborne and its sibling programs represent a constructive use of the public’s data, one with popular support and minimal privacy concerns. However, not all extensions of machine learning will be as uncontroversial. For example, researchers at the University of Wisconsin-Madison have recently turned their attention toward cyberbullying, developing algorithms that can identify victims and perpetrators based on the content of their tweets. Another initiative is Flu Trends, an effort by Google to track flu infections by monitoring people’s search queries. Supplementing Google’s model with data from Twitter could give city health departments a better grasp of local outbreaks than conventional methods provide.
Given access to these datasets, cities could certainly improve social outcomes. The issue lies in entrusting such information to the state, which would require us to loosen the right to information privacy. Although Chicagoans have taken well to their city’s use of public Twitter data through Foodborne, citizens may not be as receptive to governments looking through private search queries or monitoring children’s online activity outside of school, even if it is for the social good.
Our notions of privacy and property do not have much time to catch up; data-driven techniques are poised to spread rapidly across the nation. In just over a year, four of America’s largest cities have created their own prototypes for data-driven governance, and more cities are on the horizon. Luca told the HPR that his team has already reached out to several cities to develop specialized versions of the San Francisco algorithm that will allow health departments to target their inspections. The Foodborne group has been just as active, collaborating with Boston’s team to modify Chicago’s approach to work in a new city. Together, these early adopters have laid the groundwork for health departments nationwide, and their successes are the first step toward smarter and more responsive cities.
Image source: www.foodbornechicago.org