Daniel Aaronson is a Trust & Safety Strategist at Google. Prior to joining Google in 2016, Daniel was a Sr. Consultant in the Risk & Compliance practice at Protiviti San Francisco, where he focused on credit risk, compliance, fraud and other business risks. While at Protiviti, he helped found a FinTech group specializing in risk and compliance in unregulated spaces. Daniel also worked as a Risk & Fraud Specialist at Square. Connect with Daniel on LinkedIn.
What do you do at Google? Your bio says you help fight the bad guys!
My title is Strategist for Google Search Trust & Safety. I investigate and try to prevent any abuse that involves Google Search. One example is someone gaming Search algorithms by hacking websites and injecting bad content for profit. Abuse can also occur on products that use our comprehensive Search Index to answer questions or provide factual information, such as the Google Assistant or Google Home. We want to avoid providing answers that are offensive, biased, or controversial.
For example, if you ask your Google Home “Who should I vote for?” we wouldn’t want to accidentally promote a single candidate with a specific answer. We also wouldn’t want to source information that might be part of a misinformation campaign or pull answers from a Wikipedia page that was vandalized with bad edits. There is a lot of pressure on companies to do more about avoiding potential election meddling in ads, social media and other venues as well as remaining neutral in people’s search for information. This is a difficult but important task that involves a lot of complex problem solving, giving us a lot of interesting things to work on.
What’s another example of abuse?
A common type of abuse is that a hacker will search for websites running old versions of software that have well-known or existing vulnerabilities. They'll hack the website and inject new, spammy pages without the owner of the website knowing. The pages might phish people for information or link to really bad content.
So how are you approaching the more complicated type of abuse?
Very cautiously and with a lot of work! Any solution needs to be able to scale enormously. We're talking about trillions of searches across trillions of online pages. Every day, Google gets searches we've never seen before. It's not just about preparing for things we've seen; we also have to foresee what people might ask in the future.
Also, when news is changing constantly, we want to make sure we’re getting things right and aren’t providing bad information. There are bad actors all over the world trying to intentionally spread false information in a variety of ways. We try to investigate and analyze the data so that we can prevent abuse from surfacing on various Google products.
How has your role changed since joining Google?
As we release more products, like Google Assistant and Google Home, which were publicly launched after I joined, the kinds of abuse we need to look for evolve. Imagine asking your Google Home "who should I vote for?" and getting one answer! We want to avoid that. Each product needs to be treated differently, with its own set of unique policies designed to protect our users while also protecting the freedom of information on the internet. A website that endorses a specific candidate might be okay to surface in a regular Search result (if that's what the user is looking for) but might appear biased if it were the single answer to a neutral question on Google Assistant.
Given the importance of identifying bias in your role, how does your team approach training?
We go through trainings to make us aware of our own unconscious biases. We also have many internal tech talks that address bias and fairness, even providing resources and information about how to avoid bias in Machine Learning. We all can have unconscious biases that are a result of the environment we grew up in. It's important to recognize how those can affect your work and how they might influence our users. All of this research and work is very new, though, so our work here is always evolving.
We try to create an open atmosphere to talk about these topics, to think about them, and to be aware of them. We also try to hire a really diverse team. A lot of studies have shown that more diverse teams produce better outcomes.
What’s the most important skill in spotting abuse?
Intuition. Identifying things that don’t feel right and listening to that gut intuition that helps you find potential vectors for risk or abuse is really, really important.
What’s something you didn’t expect about your role?
Google has been around for a long time. I think of Search and how it's been an aspect of my life forever, and yet, even right now, there are still so many new ways people are abusing it, or abusing the products that use it. I've been able to come in, be part of really big, exciting projects, and actually have an impact. I thought I'd have to be at Google for 15 years for that, but I've been getting those opportunities early on.
Who should be featured in the next Alumni Spotlight? Let us know at Alumni@Protiviti.com!