Cedar Milazzo
Problem Addressed: Educating for Democracy
Solution: Created a technology platform using artificial intelligence (AI) that distinguishes trustworthy web pages from those that lack credibility.
Location: San Jose, CA
Impact: Global

What he did

Cedar Milazzo started a company to combat the spread of false, misleading, and dangerous information on the Internet. He and his team created a technology platform that can identify whether a web page contains credible or untrustworthy information. His challenge was to find an application for the technology that would sustain the business.

His story

After two decades working amidst the entrepreneurial culture of Silicon Valley, Milazzo grew increasingly concerned by the online spread of bad information in healthcare and politics, including falsehoods cited in opposition to vaccine use and Russian disinformation campaigns during the 2016 presidential election.

Then, in the spring of 2018, Milazzo read about murders in India that were motivated by a false Internet rumor. The rumor circulating on the social media platform WhatsApp claimed that bands of men were roaming the countryside, kidnapping young children, and harvesting their organs. Villagers, many of them new to the online world, took the rumors seriously and grew suspicious of any strangers visiting their communities. At least 12 people – all innocent victims – were brutally murdered because of the fear fostered by the spread of bad information.

That news pushed Milazzo to act. Within weeks, using very basic criteria, he developed a machine learning algorithm that could scan a web page and determine with 80-85% accuracy whether the information it contained was credible or untrustworthy. With refinement, he knew his model would achieve even better performance.
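Milazzo's actual features and model are not public, so purely as an illustration of what a first pass built on "very basic criteria" might look like, here is a toy heuristic scorer; the feature weights, phrase list, and scoring formula are all invented for this sketch:

```python
# Illustrative sketch only: Milazzo's real model is not public.
# These crude signals (shouting, exclamation density, clickbait
# phrases) stand in for the "very basic criteria" described above.
import re

CLICKBAIT_PHRASES = {"shocking", "miracle", "you won't believe"}

def credibility_score(text: str) -> float:
    """Return a score in [0, 1]; higher means more credible (toy heuristic)."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.5  # no signal either way
    lower = text.lower()
    # Fraction of multi-letter words written in ALL CAPS
    caps_ratio = sum(1 for w in words if w.isupper() and len(w) > 1) / len(words)
    # Exclamation marks per ~100 characters, capped
    bang_ratio = min(text.count("!") / max(len(text) / 100, 1), 3)
    bait_hits = sum(1 for phrase in CLICKBAIT_PHRASES if phrase in lower)
    penalty = 0.5 * caps_ratio + 0.1 * bang_ratio + 0.2 * bait_hits
    return max(0.0, 1.0 - penalty)
```

A real system would learn such weights from labeled pages rather than hard-coding them, but even heuristics this simple separate sober reporting from sensationalized copy often enough to explain an early 80-85% accuracy figure.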

Encouraged, Milazzo did what many others around him had done over the years: he left his job to start a new company. He incorporated as Trustie, a company that would build a browser extension to help users root out bad information and build trust in the information they consume.

What is “bad information”?

Often referred to as misinformation, disinformation, or malinformation, “bad information” is a catch-all term for information that is confusing, false, misleading, outdated, incomplete, dangerous, or hateful – whether the person sharing it knows it is false (disinformation), is unaware that it is (misinformation), or uses genuine information to harm a person or group (malinformation). Bad information spreads easily, like a virus, even though it should not be trusted. It may resemble the truth, but unlike evidence-based knowledge (good information), it weakens and damages the communities it infects.

Milazzo recruited a team of developers and joined the Credibility Coalition, a community of researchers, journalists, academics, and others interested in establishing standards for “the veracity, quality, and credibility of online information that is a foundation of civil society.” The Coalition gave him insight into marketplace efforts to suppress the spread of bad information.

By August, the team had built a prototype of a browser extension and Trustie had filed a provisional application for a patent on its machine learning credibility system. The browser extension, powered by a more robust version of Milazzo’s initial model, would signal to users how much a web page could be trusted – green for “highly trustworthy”, orange for “somewhat trustworthy but reasons to be skeptical”, and red for “don’t believe it”.
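The three-color signal amounts to a simple threshold map over the model's credibility score. The cut-off values below are hypothetical, since Trustie's real thresholds were never published:

```python
def trust_signal(score: float) -> str:
    """Map a credibility score in [0, 1] to Trustie's three-color signal.
    Thresholds (0.75, 0.4) are illustrative assumptions, not Trustie's."""
    if score >= 0.75:
        return "green"   # highly trustworthy
    if score >= 0.4:
        return "orange"  # somewhat trustworthy, reasons to be skeptical
    return "red"         # don't believe it
```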

Then they conducted some market testing and ran into an obstacle.

“We did a whole bunch of customer surveys,” Milazzo recalled. “And 100% of the people we surveyed said, this is a tool that’s needed right now. You need to build this.

“And 100% of the people said, I would not use it, but I know lots of people who would.” Milazzo laughed at this devastating punchline, making the meaning clear: they had built a product the end user was unlikely to use, let alone pay for.

He explained, “When confronted with this issue, people believe they have a better understanding of the world than they actually do.”

Author’s note: As a member of the Credibility Coalition, I was among those who provided feedback on Milazzo’s prototype.

Milazzo and his team were undaunted. They knew the core product – the machine learning credibility system – worked and worked very well. So, they did what many startups end up doing. They shifted direction.

Looking upstream from the user, Milazzo recognized that a lot of money was being spent by advertisers to place ads on web pages that spread bad information. Over $63 billion was spent in 2019 on programmatic advertising – that is, advertising in which the decision-making to buy and sell ads is automated through ad networks, meaning advertisers often don’t know where their ads are being placed. Brand safety had become a concern after two of the largest advertisers, AT&T and Verizon, pulled their ads from Google’s ad network in 2017 when they discovered their ads were appearing on YouTube videos promoting terrorism and hate speech. By 2019, most networks excluded websites that contain pornography and acts of violence, but many still lacked consistent policies relating to sites that contained bad information and few could weed out specific web pages with bad information.

That’s where Milazzo saw an opportunity.

Rebranding as Nobl Media, “the world’s first brand responsibility engine”, the company pivoted to offer advertisers the opportunity to add web page credibility as a filter for where their ads could be placed. The company was not first-to-market with this kind of natural language machine learning capability, but it was the first to target the advertiser’s ethics and social responsibility. Nobl’s technology evaluated and scored web page content for hate speech, publisher reputation, hyperpartisanship, conspiracy theories, and title and article match, among other credibility criteria. The higher the score, the more credible the page and the more valuable the ad space on the page was to the advertiser’s brand.
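One way to picture this is as a weighted combination of the criteria the article names, with the resulting page score gating whether an advertiser's ad may be placed there. The weights, score scale, and threshold below are invented for illustration; Nobl's actual scoring formula is not public:

```python
# Hypothetical aggregation of the credibility criteria named above.
# Each per-criterion signal is assumed to be in [0, 1], where 1.0 is
# best (e.g., hate_speech = 1.0 means no hate speech detected).
CRITERIA_WEIGHTS = {
    "hate_speech": 0.25,
    "publisher_reputation": 0.25,
    "hyperpartisanship": 0.20,
    "conspiracy": 0.20,
    "title_article_match": 0.10,
}

def page_score(signals: dict) -> float:
    """Weighted sum of per-criterion scores; missing criteria score 0."""
    return sum(w * signals.get(k, 0.0) for k, w in CRITERIA_WEIGHTS.items())

def eligible_for_ads(signals: dict, threshold: float = 0.7) -> bool:
    """Advertiser-side filter: only buy ad space on pages above threshold."""
    return page_score(signals) >= threshold
```

In a programmatic pipeline, a filter like `eligible_for_ads` would run per page before a bid is placed, which is what lets the system exclude individual low-credibility pages rather than whole sites.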

Ad space on a high-scoring web page commanded a premium of about 15% over a typical ad buy, according to Milazzo, but the additional cost was more than made up by the return on investment, “effectively making the service free.”

The service went live as planned early in 2020 and immediately hit another obstacle – Covid. When the pandemic shut down the economy, ad budgets crashed, forcing several clients to cancel their contracts with Nobl. Advertisers eventually rebounded following the pandemic and the company began to grow. But given the political sensitivities in an increasingly divided world around what is true, it may be no surprise that the company discovered many marketing officers had little incentive to advocate strongly for its credibility system.

Milazzo explained that while advertisers liked the idea, they were unwilling to push the agencies on which they depended to contract for Nobl’s service. And most agencies – Milazzo singled out Cossette, a Canadian agency, as an exception – saw no reason to disrupt an ad network system that was working well for them.

At its peak, Nobl served 50 customers – enough to demonstrate the effectiveness of its model. Milazzo said studies showed that implementing Nobl technology improved the return on ad spend by up to 150%. It was not, however, enough to sustain the business. After five years, he shut down ongoing sales and development efforts in January 2024. The company continued to service its existing clients as the team looked for an exit strategy.

Finally, in November, nine months after shutting down, Milazzo sold the company. While he could not yet share the details, he did say that the new company wanted to integrate Nobl’s technology into its own efforts to categorize news stories.

The business may not have thrived, but the technology had proved its value and will live on. As those currently in power continue to promote a national narrative founded on lies and distortions, we are eager to see what Milazzo’s bad information detector can do in its next iteration.

As for Milazzo, he’s cast a wide net as he looks for his next move. He’s ready to join an existing company to replenish his bank account and sink his teeth into another worthwhile project, whether it’s finding new ways to use AI to fight the spread of bad information or something else.

The Bias in Technology

Milazzo discussed the challenges of building Nobl’s AI technology platform in a world consumed by perceptions of bias – and the potential that the company’s own work might be accused of it.

“We all tried very hard to be non-partisan, nonpolitical. But,” he said, recalling a quote, “‘reality is biased’, right?”

During a panel on AI sponsored by the Tortora Brayda Institute, a think tank for AI and cybersecurity, Milazzo addressed the inherent bias of AI models like ChatGPT and emphasized the need to be transparent about what data is collected, how it’s collected, and who collected and curated the data.

In his view, building an AI model is all about the data used to train the AI. Without quality data going into the model, we should not expect quality outputs. And without transparency around the who, what, and how of that data collection, the model should not be trusted.

“In the end, everybody’s going to have some sort of bias, and if you are putting out an AI solution, you should be transparent about what biases are still present.”

Author: George Linzer
Published: March 25, 2025


Sources

Interviews with Cedar Milazzo, Apr 19, 2024, and Feb 5, 2025, plus follow-up emails

Annie Gowen, “As mob lynchings fueled by WhatsApp messages sweep India, authorities struggle to combat fake news”, Washington Post, Jul 2, 2018, https://wapo.st/4b6hh3I, accessed Feb 19, 2025

Agence France-Presse, “Death by ‘fake news’: Social media-fuelled lynchings shock India”, The Straits Times, Jul 14, 2018, https://www.straitstimes.com/asia/south-asia/death-by-fake-news-social-media-fuelled-lynchings-shock-india, accessed Feb 19, 2025

Timothy McLaughlin, “How WhatsApp Fuels Fake News and Violence in India”, Wired, Dec 12, 2018, https://www.wired.com/story/how-whatsapp-fuels-fake-news-and-violence-in-india/, accessed Feb 15, 2025

Media Defence, “Misinformation, Disinformation, and Mal-Information”, https://www.mediadefence.org/ereader/publications/modules-on-litigating-freedom-of-expression-and-digital-rights-in-south-and-southeast-asia/module-8-false-news-misinformation-and-propaganda/misinformation-disinformation-and-mal-information/, accessed Mar 19, 2025

Credibility Coalition, homepage, https://credibilitycoalition.org/, accessed Feb 15, 2025

Cedar Woodchopper Milazzo, Jacob Bailly, Nameer Hirschkind, Elizabeth Earle, “Systems and Methods for Determining Credibility at Scale”, US Patent and Trademark Office, Dec 23, 2021, https://ppubs.uspto.gov/dirsearch-public/print/downloadBasicPdf/20210397668?requestToken=eyJzdWIiOiI0Nzc1NTg5NC1lZDFiLTQyNmUtOGRiYi0yNWFmNzQzZjUwZTUiLCJ2ZXIiOiIyYzI4ZTI3OS04YmQ3LTRlMTctYWI1MS1hNjY5ZWU3MzViODMiLCJleHAiOjB9, accessed Mar 19, 2025

Jeremy Goldman, “US programmatic ad spending set to reach nearly $180 billion by 2025”, E-Marketer, Jan 11, 2024, https://www.emarketer.com/content/programmatic-ad-spending-set-reach-nearly-180-billion-by-2025, accessed Mar 24, 2025

Catherine Shu, “AT&T and Verizon join advertising boycott against Google over offensive YouTube videos”, TechCrunch, Mar 23, 2017, https://techcrunch.com/2017/03/23/att-verizon-boycott-google-ads/, accessed Mar 24, 2025

Brock Munro, “What is Brand Safety? Why It’s Important for Publishers”, Publift, Jan 28, 2025, https://www.publift.com/blog/brand-safety, accessed Mar 24, 2025

Tortora Brayda Institute for AI & Cybersecurity, “How AI Is Solving The World’s Most Difficult Problems”, https://www.linkedin.com/events/7112854595466379264/comments/, accessed Apr 12, 2024
