Skip to article frontmatterSkip to article content
Site not loading correctly?

This may be due to an incorrect BASE_URL configuration. See the MyST Documentation for reference.

When External Data Becomes a Trojan Horse

6 Jun 2025 - Volkan Kutal

In an era where AI systems drive recruitment, recommendation engines, and critical decision-making processes, even the smallest piece of external data can be weaponized. In our latest project, we combined the AI Recruiter with the XPIA Attack. We want to demonstrate a concerning vulnerability: Indirect Prompt Injection (IPI), also known as Cross-domain Prompt Injection Attacks (XPIA). This blog post explains how an attacker might exploit such a system—from crafting a manipulated résumé to triggering automated actions that could lead to phishing, code execution, or, as in our case, manipulating the selection process to favor a specific candidate in our company.

Understanding the AI Recruiter Project

The AI Recruiter is designed to match résumés with job descriptions using GPT-4o, while also serving as a testing ground for Retrieval-Augmented Generation (RAG) vulnerabilities through Cross-domain Prompt Injection Attacks (XPIA). It integrates ChromaDB for semantic search, while also leveraging PyRIT’s XPIA Attack to automate attacks, allowing AI red-teaming workflows for security research in AI pipelines.

Key Features


The Exploit in Detail: Step-by-Step

1. Choosing the Target Job

An attacker identifies a job posting they wish to apply for. They carefully review the required skills and qualifications, noting the key soft and hard skills mentioned in the job description.

2. Crafting the Manipulated Résumé

Instead of submitting a standard résumé, the attacker uses the PDF converter integrated in PyRIT to inject hidden content into their résumé. The same PDF converter is also an integral part of the XPIA Attack’s workflow, ensuring that the manipulation is both stealthy and effective. Here’s how it works:

3. Uploading via XPIA Attack

Once the résumé has been manipulated by the PDF converter, it is uploaded through the XPIA Attack:

4. AI Evaluation with GPT-4o

The next stage involves in-depth evaluation:

5. The Phishing Risk

The consequences of manipulation do not stop at résumé scoring:


Beyond Recruitment: The Broader Threat Landscape

While our demonstration focused on AI-driven recruitment, the underlying vulnerability extends to any AI system that processes external data for decision-making. Microsoft’s AI Red Teaming efforts, as outlined in the AI Red Team Lessons eBook, have identified similar risks, including prompt injection, manipulated feedback loops, and unauthorized automation. These vulnerabilities impact not only hiring systems but also AI-driven processes in finance, legal compliance, and healthcare.

Manipulated AI-Generated Summaries in Decision Systems

Large-Scale Automated Manipulation in AI Systems


Conclusion

Returning to our demonstration with the XPIA Attack and AI Recruiter, we’ve exposed a fundamental weakness: external data can be weaponized to manipulate AI-driven decision-making. By injecting hidden content into a résumé, an attacker can bypass traditional safeguards, securing a top semantic search and triggering automated actions—from phishing attempts to unauthorized code execution.

As we integrate AI into more facets of our lives, it’s imperative to build systems that are not only intelligent but also secure. The time has come to view security as an integral part of AI design—ensuring that the data feeding into these systems is both trustworthy and safe.


Explore More: