Agentic Safety and Ecosystem Architect, Trust and Safety
Company: Google
Location: Kirkland
Posted on: April 1, 2026
Job Description:
In accordance with Washington state law, we are highlighting our comprehensive benefits package, which is available to all eligible US-based employees. Benefits for this role include:
Health, dental, vision, life, and disability insurance
Retirement Benefits: 401(k) with company match
Paid Time Off: 20 days of vacation per year, accruing at a rate of 6.15 hours per pay period for the first five years of employment
Sick Time: 40 hours/year (statutory, where applicable); 5 days/event (discretionary)
Maternity Leave (Short-Term Disability Baby Bonding): 28-30 weeks
Baby Bonding Leave: 18 weeks
Holidays: 13 paid days per year

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Kirkland, WA, USA; Austin, TX, USA.

Minimum qualifications:
Bachelor's degree or equivalent practical experience.
7 years of experience in data analytics, Trust and Safety, policy, cybersecurity, or related fields.
Experience with ML/AI-based programs in Trust and Safety.
Preferred qualifications:
Master's degree or PhD in a relevant field.
Experience in agentic architecture and data permissions.
Experience in technical product design and AI systems.
Experience in program management.
Understanding of AI/ML system safety and integrity.

About the job

Trust & Safety team members are tasked
with identifying and taking on the biggest problems that challenge
the safety and integrity of our products. They use technical
know-how, excellent problem-solving skills, user insights, and
proactive communication to protect users and our partners from
abuse across Google products like Search, Maps, Gmail, and Google
Ads. On this team, you're a big-picture thinker and strategic
team-player with a passion for doing what’s right. You work
globally and cross-functionally with Google engineers and product
managers to identify and fight abuse and fraud cases at Google
speed, with urgency. And you take pride in knowing that every day
you are working hard to promote trust in Google and ensuring the
highest levels of user safety.

As an Agentic Safety and Ecosystem
Architect, you will design for the safety and integrity of
autonomous AI agents across the Android platform. Your focus is to
bridge the gap between technical agent capabilities and the
real-world safety of billions of users. You will be responsible for
identifying and mitigating the harms that arise when AI agents
interact with the OS and third-party apps—ranging from system-level
instability to the creation of harmful synthetic media. By
developing detection nets and enforcement frameworks, you will
ensure that agentic features do not become vectors for abuse.

At
Google we work hard to earn our users’ trust every day. Trust &
Safety is Google’s team of abuse fighting and user trust experts
working daily to make the internet a safer place. We partner with
teams across Google to deliver bold solutions in abuse areas such
as malware, spam and account hijacking. A team of Analysts, Policy
Specialists, Engineers, and Program Managers, we work to reduce
risk and fight abuse across all of Google’s products, protecting
our users, advertisers, and publishers across the globe in over 40
languages.

The US base salary range for this full-time position is
$142,000-$205,000 + bonus + equity + benefits. Our salary ranges are
determined by role, level, and location. Within the range,
individual pay is determined by work location and additional
factors, including job-related skills, experience, and relevant
education or training. Your recruiter can share more about the
specific salary range for your preferred location during the hiring
process. Please note that the compensation details listed in US
role postings reflect the base salary only, and do not include
bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities:
Develop the rules of engagement for autonomous agents on Android, ensuring that multi-step plans are audited for safety before execution.
Implement strict "least-privilege" models so agents cannot escalate system permissions or access sensitive user data without explicit, context-aware consent.
Partner with the Product team to build runtime monitoring that identifies "agentic drift," recursive loops, or adversarial attempts to hijack agent reasoning.
Create self-testing kits for the Android developer community to ensure that third-party agents are built with brakes and kill-switches.