ReadonlyREST
For Elasticsearch

Overview: The ReadonlyREST Suite

ReadonlyREST is a lightweight Elasticsearch plugin that adds encryption, authentication, authorization and access control capabilities to Elasticsearch's embedded REST API. The core of this plugin is an ACL engine that checks each incoming request through a sequence of rules, a bit like a firewall. There are a dozen rules that can be grouped in sequences of blocks to form a powerful representation of a logic chain.
The Elasticsearch plugin known as ReadonlyREST Free is released under the GPLv3 license, or alternatively a commercial license (see ReadonlyREST Embedded), and lays the technological foundation for the companion Kibana plugin, which is released in two versions: ReadonlyREST PRO and ReadonlyREST Enterprise.
Unlike the Elasticsearch plugin, the Kibana plugins are commercial only, but they rely on the Elasticsearch plugin in order to work.
For a description of the Kibana plugins, skip to the dedicated documentation page instead.

ReadonlyREST Free plugin for Elasticsearch

In this document we are going to describe how to operate the Elasticsearch plugin and all its features. Once installed, this plugin will greatly extend the Elasticsearch HTTP API (port 9200), adding numerous extra capabilities:
    Encryption: transform the Elasticsearch API from HTTP to HTTPS
    Authentication: require credentials
    Authorization: declare groups of users, permissions and partial access to indices.
    Access control: complex logic can be modeled using an ACL (access control list) written in YAML.
    Audit logs: a trace of the access requests can be logged to file or index (or both).

Flow of a Search Request

The following diagram models an instance of Elasticsearch with the ReadonlyREST plugin installed, and configured with SSL encryption and an ACL with at least one "allow" type ACL block.
readonlyrest request processing diagram
    1. The User Agent (i.e. cURL, Kibana) sends a search request to Elasticsearch using the port 9200 and the HTTPS URL schema.
    2. The HTTPS filter in the ReadonlyREST plugin unwraps the SSL layer and hands over the request to the Elasticsearch HTTP stack.
    3. The HTTP stack in Elasticsearch parses the HTTP request.
    4. The HTTP handler in Elasticsearch extracts the indices, action, request type and creates a SearchRequest (internal Elasticsearch format).
    5. The SearchRequest goes through the ACL (access control list); external systems like LDAP can be asynchronously queried, and an exit result is eventually produced.
    6. The exit result is used by the audit log serializer to write a record to an index and/or the Elasticsearch log file.
    7. If no ACL block was matched, or if a type: forbid block was matched, ReadonlyREST does not forward the search request to the search engine and creates an "unauthorized" HTTP response.
    8. In case the ACL matched a type: allow block, the request is forwarded to the search engine.
    9. The Elasticsearch code creates a search response containing the results of the query.
    10. The search response is converted to an HTTP response by the Elasticsearch code.
    11. The HTTP response flows back to ReadonlyREST's HTTPS filter and to the User Agent.

Installing the plugin

To install ReadonlyREST plugin for Elasticsearch:

1. Obtain the build

Download the build from the official download page: select your Elasticsearch version and send yourself a link to the compatible ReadonlyREST zip file.

2. Install the build

bin/elasticsearch-plugin install file:///tmp/readonlyrest-X.Y.Z_esW.Q.U.zip
Notice how we need to type in the format file:// + absolute path (yes, with three slashes).
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
When prompted about additional permissions, answer y.

3. Create settings file

Create and edit the readonlyrest.yml settings file in the same directory where elasticsearch.yml is found:
vim $ES_PATH_CONF/readonlyrest.yml
Now write some basic settings, just to get started. In this example we are going to tell ReadonlyREST to require HTTP Basic Authentication for all the HTTP requests, and return 401 Unauthorized otherwise.
readonlyrest:
    access_control_rules:

    - name: "Require HTTP Basic Auth"
      type: allow
      auth_key: user:password

4. Disable X-Pack security module

(applies to ES 6.4.0 or greater)
ReadonlyREST and X-Pack security module can't run together, so the latter needs to be disabled.
Edit elasticsearch.yml and append xpack.security.enabled: false.
vim $ES_PATH_CONF/elasticsearch.yml

5. Start Elasticsearch

bin/elasticsearch

or:

service elasticsearch start
Depending on your environment.
Now you should be able to see ReadonlyREST-related lines in the logs, like the one below:

[2018-09-18T13:56:25,275][INFO ][o.e.p.PluginsService ] [c3RKGFJ] loaded plugin [readonlyrest]

6. Test everything is working

The following command should succeed, and the response should show a status code 200.
curl -vvv -u user:password "http://localhost:9200/_cat/indices"

The following command should not succeed, and the response should show a status code 401.

curl -vvv "http://localhost:9200/_cat/indices"

Upgrading the plugin

To upgrade ReadonlyREST for Elasticsearch:

1. Stop Elasticsearch.

Either kill the process manually, or use:
service elasticsearch stop
depending on your environment.

2. Uninstall ReadonlyREST

bin/elasticsearch-plugin remove readonlyrest

3. Install the new version of ReadonlyREST into Elasticsearch.

bin/elasticsearch-plugin install file://<download_dir>/readonlyrest-<ROR_VERSION>_es<ES_VERSION>.zip

e.g.

bin/elasticsearch-plugin install file:///tmp/readonlyrest-1.16.15_es6.1.1.zip

4. Restart Elasticsearch.

bin/elasticsearch

or:

service elasticsearch start

Depending on your environment.
Now you should be able to see ReadonlyREST-related lines in the logs, like the one below:

[2018-09-18T13:56:25,275][INFO ][o.e.p.PluginsService ] [c3RKGFJ] loaded plugin [readonlyrest]

Removing the plugin

1. Stop Elasticsearch.

Either kill the process manually, or use:
service elasticsearch stop
depending on your environment.

2. Uninstall ReadonlyREST from Elasticsearch:

bin/elasticsearch-plugin remove readonlyrest

3. Start Elasticsearch.

bin/elasticsearch

or:

service elasticsearch start
Depending on your environment.

Deploying ReadonlyREST in a stable production cluster

Unless some advanced features are being used (see below), this Elasticsearch plugin operates like a lightweight, stateless filter glued in front of the Elasticsearch HTTP API. Therefore it's sufficient to install the plugin only on the nodes that expose the HTTP interface (port 9200).
Installing ReadonlyREST in a dedicated node has numerous advantages:
    No need to restart all nodes, only the one you have installed the plugin into.
    No need to restart all nodes to update the security settings.
    No need to restart all nodes when a security update is out.
    Less complexity on the actual cluster nodes.
For example, if we want to move to HTTPS all the traffic coming from Logstash into a 9-node Elasticsearch cluster which has been running stable in production for a while, it's not necessary to install the ReadonlyREST plugin on all the nodes.
To create a dedicated, lightweight ES node on which to install ReadonlyREST:
    1. (Optional) Disable the HTTP interface on all the existing nodes.
    2. Create a new, lightweight, dedicated node without shards nor master eligibility (see the configuration sketch below).
    3. Configure ReadonlyREST with SSL encryption in the new node.
    4. Configure Logstash to connect to the new node directly in HTTPS.
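A minimal sketch of the elasticsearch.yml for such a dedicated node, assuming the legacy node.* flags of ES 6.x (the exact setting names vary across Elasticsearch versions, so check them against your version):

node.master: false   # not eligible to become master
node.data: false     # holds no shards
node.ingest: false   # runs no ingest pipelines
http.enabled: true   # this node exposes the HTTP interface (port 9200)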

An exception

⚠️IMPORTANT By default, when the fields rule is used, the ReadonlyREST plugin must be installed in all the data nodes.

ACL basics

The core of this plugin is an ACL (access control list): a logic structure very similar to the one found in firewalls. The ACL is part of the plugin configuration, and it's written in YAML.
    The ACL is composed of an ordered sequence of named blocks.
    Each block contains some rules, and a policy (forbid or allow).
    HTTP requests run through the blocks, starting from the first.
    The first block whose rules are all satisfied decides whether to forbid or allow the request (according to its policy).
    If none of the blocks match, the request is rejected.
⚠️IMPORTANT: The ACL blocks are evaluated sequentially, therefore the ordering of the ACL blocks is crucial. The order of the rules inside an ACL block, instead, is irrelevant.
readonlyrest:
    access_control_rules:

    - name: "Block 1 - only Logstash indices are accessible"
      type: allow # <-- default policy type is "allow", so this line could be omitted
      indices: ["logstash-*"] # <-- this is a rule

    - name: "Block 2 - Blocking everything from a network"
      type: forbid
      hosts: ["10.0.0.0/24"] # <-- this is a rule
An example of an access control list (ACL) made of two blocks.
The YAML snippet above, like all of this plugin's settings, should be saved inside the readonlyrest.yml file. Create this file on the same path where elasticsearch.yml is found.
TIP: If you are a subscriber of the PRO or Enterprise Kibana plugin, you can edit and refresh the settings through a GUI. For more on this, see the documentation for ReadonlyREST plugin for Kibana.

Encryption

An SSL encrypted connection is a prerequisite for the secure exchange of credentials and data over the network. Let's Encrypt certificates work just fine, once they are inside a JKS keystore. ReadonlyREST can be configured to encrypt network traffic on two independent levels:

External REST API

This wraps the connection between the client and the exposed REST API in an SSL context, making it encrypted and secure. ⚠️IMPORTANT: To enable SSL for the REST API, open elasticsearch.yml and append this one line:
http.type: ssl_netty4
Now in readonlyrest.yml add the following settings:
readonlyrest:
    ssl:
      keystore_file: "keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest
The keystore should be stored in the same directory as elasticsearch.yml and readonlyrest.yml.

Internode communication - transport module

This option encrypts communication between the nodes forming an Elasticsearch cluster.
⚠️IMPORTANT: To enable SSL for internode communication, open elasticsearch.yml and append this one line:
transport.type: ror_ssl_internode
In readonlyrest.yml the following settings must be added (this is just an example configuration showing the most important properties):
readonlyrest:
    ssl_internode:
      keystore_file: "keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest
As with SSL for HTTP, the keystore should be stored in the same directory as elasticsearch.yml and readonlyrest.yml. This config must be added to all nodes taking part in encrypted communication within the cluster.

Certificate verification

By default certificate verification is disabled: certificates are not validated in any way, so all certificates are accepted. This is useful in local/test environments, where security is not the most important concern. In production environments it is advised to enable this option. It can be done by means of:
certificate_verification: true
under the ssl_internode section. This option is applicable only for internode SSL.
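Put together, a complete internode section with verification enabled would look roughly like this (a sketch assembled from the properties shown above):

readonlyrest:
    ssl_internode:
      keystore_file: "keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest
      certificate_verification: true   # reject certificates that fail validation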

Client authentication

By default, client authentication is disabled. When enabled, the server asks the client for its certificate, so ES is able to verify the client's identity. It can be enabled by means of:
client_authentication: true
under the ssl section. This option is applicable only for the external REST API SSL.
Both certificate_verification and client_authentication can be enabled with the single property verification. Using this property is deprecated and allowed only for backward compatibility; the specialized properties make the configuration more readable and explicit.
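For example, a REST API SSL section requiring client certificates could be sketched as follows (assembled from the properties described above):

readonlyrest:
    ssl:
      keystore_file: "keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest
      client_authentication: true   # the server will request and verify the client's certificate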

Restrict SSL protocols and ciphers

Optionally, it's possible to specify a list of allowed SSL protocols and SSL ciphers. Connections from clients that don't support the listed protocols or ciphers will be dropped.
readonlyrest:
    ssl:
      # put the keystore in the same dir with elasticsearch.yml
      keystore_file: "keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest
      allowed_protocols: [TLSv1.2]
      allowed_ciphers: [TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]
ReadonlyREST will log a list of available ciphers and protocols supported by the current JVM at startup.
[2018-01-03T10:09:38,683][INFO ][t.b.r.e.SSLTransportNetty4] ROR SSL: Available ciphers: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA
[2018-01-03T10:09:38,684][INFO ][t.b.r.e.SSLTransportNetty4] ROR SSL: Restricting to ciphers: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
[2018-01-03T10:09:38,684][INFO ][t.b.r.e.SSLTransportNetty4] ROR SSL: Available SSL protocols: TLSv1,TLSv1.1,TLSv1.2
[2018-01-03T10:09:38,685][INFO ][t.b.r.e.SSLTransportNetty4] ROR SSL: Restricting to SSL protocols: TLSv1.2

Custom truststore

ReadonlyREST allows using a custom truststore, replacing the default one provided by the JRE. A custom truststore can be set with:
truststore_file: "truststore.jks"
truststore_pass: truststorepass
under the ssl or ssl_internode section. This option is applicable to both SSL modes - external SSL and internode SSL. The truststore should be stored in the same directory as elasticsearch.yml and readonlyrest.yml (like the keystore). When not specified, ReadonlyREST uses the default truststore.
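For instance, an external SSL section using a custom truststore could look like this sketch (file names and passwords are placeholders):

readonlyrest:
    ssl:
      keystore_file: "keystore.jks"
      keystore_pass: readonlyrest
      key_pass: readonlyrest
      truststore_file: "truststore.jks"
      truststore_pass: truststorepass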

Blocks of rules

Every block must have at least the name field, and optionally a type field valued either "allow" or "forbid". If you omit the type, your block will be treated as type: allow by default.
Keep in mind that the ReadonlyREST ACL is a white list, so by default all requests are blocked, unless you specify a block of rules that allows all or some requests.
    name will appear in logs, so keep it short and distinctive.
    type can be either allow or forbid. Can be omitted, default is allow.
- name: "Block 1 - Allowing anything from localhost"
  type: allow
  # In real life now you should increase the specificity by adding rules here (otherwise this block will allow all requests!)
Example: the simplest example of an allow block.
⚠️IMPORTANT: if no blocks are configured, ReadonlyREST rejects all requests.

Rules

ReadonlyREST access control rules allow decisions to be taken at three levels:
    Network level
    HTTP level
    Elasticsearch level
Please refrain from using HTTP level rules to protect certain indices or limit what people can do to an index. The control you get at this level is really coarse, especially because the Elasticsearch REST API does not always respect RESTful principles. This makes HTTP a bad abstraction level for writing ACLs in Elasticsearch altogether.
The only clean and exhaustive way to implement access control is to reason about requests AFTER Elasticsearch has parsed them. Only then will the list of affected indices and the action be known for sure. See Elasticsearch level rules.

Transport level rules

These are the most basic rules. It is possible to allow/forbid requests originating from a list of IP addresses, host names or IP networks (in slash notation).

hosts

hosts: ["10.0.0.0/24"] Match a request whose origin IP address (also called origin address, or OA in logs) matches one of the specified IP addresses or subnets.

hosts_local

hosts_local: ["127.0.0.1", "127.0.0.2"] Match a request whose destination IP address (called DA in logs) matches one of the specified IP addresses or subnets. This finds application when the Elasticsearch HTTP API is bound to multiple IP addresses.
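As an illustration, a block combining both network rules might look like this sketch (all addresses are hypothetical):

- name: "Intranet clients on the internal interface"
  type: allow
  hosts: ["10.0.0.0/24"]        # origin address (OA) of the client
  hosts_local: ["192.168.0.10"] # destination address (DA) the request arrived on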

HTTP Level rules

accept_x-forwarded-for_header

accept_x-forwarded-for_header: false
⚠️DEPRECATED (use x_forwarded_for instead) A modifier for the hosts rule: if the origin IP doesn't match, fall back to checking the X-Forwarded-For header.

x_forwarded_for

x_forwarded_for: ["192.168.1.0/24"]
Behaves exactly like hosts, but takes the source IP address (a.k.a. origin address, OA in logs) from the X-Forwarded-For header only (a useful replacement for the hosts rule when requests come through a load balancer like AWS ELB).
Load balancers
This is a nice tip if your Elasticsearch is behind a load balancer. If you want to match all the requests that come through the load balancer, use x_forwarded_for: ["0.0.0.0/0"]. This will match the requests with a valid IP address as a value of the X-Forwarded-For header.
DNS lookup caching
It's worth noting that DNS resolutions are cached by the JVM. By default, successfully resolved IPs will be cached forever (until Elasticsearch is restarted) for security reasons. However, this may not always be the desired behaviour, and it can be changed by adding the following JVM options either in the jvm.options file or via the ES_JAVA_OPTS environment variable: sun.net.inetaddr.ttl=TTL_VALUE (and/or sun.net.inetaddr.negative.ttl=TTL_VALUE). More details about the problem can be found here.

methods

methods: [GET, DELETE]
Match requests with HTTP methods specified in the list. N.B. the Elasticsearch HTTP stack does not make any difference between HEAD and GET, so all HEAD requests will appear as GET.
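For example, a sketch of a block that only lets GET requests through to some hypothetical public indices:

- name: "Read-only methods on public indices"
  type: allow
  methods: [GET]
  indices: ["public-*"] # hypothetical index pattern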

headers

headers: ["h1:x*y","~h2:*xy"]
Match if all the HTTP headers in the request match the defined patterns in the headers rule. This is useful in conjunction with proxy_auth, to carry authorization information (e.g. headers: x-usr-group: admins).
The ~ sign is a pattern negation, so e.g. ~h2:*xy means: match if the h2 header's value does not match the pattern *xy, or if h2 is not present at all.
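To make this concrete, here is a sketch of a block that trusts a reverse proxy for authentication and additionally requires a group header (the header name and value are assumptions):

- name: "Admins as asserted by the proxy"
  proxy_auth: "*"
  headers: ["x-usr-group:admins"] # hypothetical header set by the proxy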

headers_and

headers_and: ["hdr1:val_*xyz","~hdr2:xyz_*"]
An alias for the headers rule.

headers_or

headers_or: ["x-myheader:val*","~header2:*xy"]
Match if at least one of the specified HTTP header key:value pairs is matched.

uri_re

uri_re: ["^/secret-index/.*", "^/some-index/.*"]
☠️HACKY (try to use the indices/actions rules instead)
Match if at least one specified regular expression matches the requested URI.

maxBodyLength

maxBodyLength: 0
Match requests having a request body length less than or equal to an integer. Use 0 to match only requests without a body.
NB: the Elasticsearch HTTP API breaks the specifications, and GET requests might have a body length greater than zero.

api_keys

api_keys: [123456, abcdefg]
A list of API keys expected in the header X-Api-Key.
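A sketch of how this could be used to let an automated client through (the key and index pattern are placeholders):

- name: "Automation via API key"
  type: allow
  api_keys: [abcdefg]
  indices: ["metrics-*"] # hypothetical

The client would then send the header X-Api-Key: abcdefg with every request.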

Elasticsearch level rules

indices

indices: ["sales", "logstash-*"]
Matches if the request involves a set of indices whose name is "sales", or starts with the string "logstash-", or a combination of both.
If a request involves a wildcard (i.e. "logstash-*", "*"), this is first expanded to the list of available indices, and then treated normally as follows:
    Requests that do not involve any indices (cluster admin, etc) result in a "match".
    Requests that involve only allowed indices result in a "match".
    Requests that involve a mix of allowed and prohibited indices are rewritten to only involve the allowed indices, and result in a "match".
    Requests that involve only prohibited indices result in a "no match", and the ACL evaluation moves on to the next block.
The rejection message and HTTP status code returned to the requester are chosen carefully with the main intent to make ES behave like the prohibited indices do not exist at all.
The rule has also an extended version:
indices:
    patterns: ["sales", "logstash-*"]
    must_involve_indices: false
The definition above has the same meaning as the short version shown at the beginning of this section. By default the rule will match when a request doesn't involve indices (e.g. a /_cat/nodes request). But we can change this behaviour by configuring must_involve_indices: true - in this case the request above will be rejected by the rule.
In detail, with examples
In ReadonlyREST we roughly classify requests as:
    "read": the request will not change the data or the configuration of the cluster
    "write": when allowed, the request changes the internal state of the cluster or the data.
If a read request asks for some indices the user has permission for and some indices that they do NOT have permission for, the request is rewritten to involve only the subset of indices they have permission for. This behaviour is very useful in Kibana: different users can see the same dashboards with data from only their own indices.
When the subset of indices is empty, it means that the user is not allowed to access the requested indices. In a multitenancy environment we should consider two options:
    the requested indices don't exist
    the requested indices exist, but the logged-in user is not authorized to access them
For both of these cases ROR is going to return HTTP 404, or HTTP 200 with an empty response - the same behaviour observed for ES with ROR disabled (for a nonexistent index). If an index does exist but a user is not authorized to access it, ROR is going to pretend that the index doesn't exist, and the response will be the same as if the index actually did not exist. See detailed example.
It's also worth mentioning that when prompt_for_basic_auth is set to true (which is the default value), ROR is going to return the HTTP 401 status code instead of 404. This is relevant for users who don't use the ROR Kibana plugin and who would like to take advantage of the default Kibana behaviour, which shows the native browser basic auth dialog when it receives an HTTP 401 response.
If a write request wants to write to indices the user doesn't have permission for, the write request is rejected.
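As an illustration of this rewriting behaviour, with a sketch like the following (credentials and index pattern are hypothetical), a read request involving "*" from this user would be rewritten to involve only the indices matching "sales-*", while a write request to any other index would simply not match the block:

- name: "Sales team - own indices only"
  auth_key: sales:p455wd # hypothetical credentials; use a hashed rule in production
  indices: ["sales-*"]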
Requests related to templates
Templates are also connected with indices, but rather indirectly. An index template has index patterns and can also have aliases. During index template creation or modification, ROR checks if the index patterns and aliases defined in the request body are allowed. When a user tries to remove or get a template by name, ROR checks if the template can be considered allowed for the user and, based on that, allows or forbids removing or seeing it. See details.

actions

actions: ["indices:data/read/*"]
Match if the request action starts with "indices:data/read/".
In Elasticsearch, each request carries only one action. We extracted the full list of valid action strings from the Elasticsearch source code, for all Elasticsearch versions. Please see the dedicated section to find the actions list of your specific Elasticsearch version.
Example actions (see above for the full list):
"cluster:admin/data_frame/delete"
"cluster:admin/data_frame/preview"
...
"cluster:monitor/data_frame/stats/get"
"cluster:monitor/health"
"cluster:monitor/main"
"cluster:monitor/nodes/hot_threads"
"cluster:monitor/nodes/info"
...
"indices:admin/aliases"
"indices:admin/aliases/get"
"indices:admin/analyze"
...
"indices:data/read/get"
"indices:data/read/mget"
"indices:data/read/msearch"
"indices:data/read/msearch/template"
...
"indices:data/write/bulk"
"indices:data/write/bulk_shard_operations[s]"
"indices:data/write/delete"
"indices:data/write/delete/byquery"
"indices:data/write/index"
"indices:data/write/reindex"
...
many more...
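As a usage sketch, a forbid block built on this rule could shield some hypothetical logstash-* indices from any write action:

- name: "No writes to logstash indices"
  type: forbid
  actions: ["indices:data/write/*"]
  indices: ["logstash-*"]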

kibana_access

kibana_access: ro
This "macro" rule enables the minimum set of actions necessary for a browser to use Kibana. It allows a set of actions towards the designated kibana index (see the kibana_index rule - defaults to ".kibana"), plus a stricter subset of read-only actions towards other indices, which are considered "data indices".
The idea is that with one single rule we allow the bare minimum set of index+action combinations necessary to support a Kibana browsing session.
Possible access levels:
    ro_strict: the browser has a read-only view on Kibana dashboards and settings and all other indices.
    ro: some write requests can go through to the .kibana index so that UI state in discover can be saved and short urls can be created.
    rw: some more actions will be allowed towards the .kibana index only, so Kibana dashboards and settings can be modified.
    admin: like above, but has additional permissions to use the ReadonlyREST PRO/Enterprise Kibana app.
NB: The "admin" access level does not mean the user will be allowed to access all indices/actions. It's just like "rw", with the added privilege of changing settings. If you want really unrestricted access for your Kibana user, including the ReadonlyREST PRO/Enterprise app, set kibana_access: unrestricted. You can use this rule together with the users rule to restrict access to selected admins.
This rule is often used together with the indices rule, to limit the data a user is able to see represented on the dashboards. In that case, do not forget to allow the custom kibana index in the indices rule!
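For example, a sketch of a read-only Kibana user limited to hypothetical logstash data (note how the ".kibana" index is allowed in the indices rule):

- name: "Kibana viewer"
  auth_key: viewer:pwd # hypothetical credentials; use a hashed rule in production
  kibana_access: ro
  indices: ["logstash-*", ".kibana"]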

kibana_index

kibana_index: .kibana-user1
The default value is .kibana.
Specify to what index we expect Kibana to attempt to read/write its settings (use this together with the kibana.index setting in kibana.yml).
This value directly affects how kibana_access works, because at all the access levels (yes, even admin) the kibana_access rule will not match any write request in indices that are not the designated kibana index.
If used in conjunction with ReadonlyREST Enterprise, this rule enables multi tenancy, because in ReadonlyREST a tenancy is identified with a set of Kibana configurations, which are by design collected inside a kibana index (default: .kibana).
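For instance, a per-user tenancy block could be sketched like this, with @{user} expanded at request time (the index naming is an assumption):

- name: "Personal tenancy"
  auth_key: bob:pwd # hypothetical credentials
  kibana_access: rw
  kibana_index: ".kibana_@{user}"
  indices: ["data-*", ".kibana_@{user}"]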

snapshots

snapshots: ["snap_@{user}_*"]
Restrict what snapshot names can be saved or restored.

repositories

repositories: ["repo_@{user}_*"]
Restrict into what repositories snapshots can be saved.

filter

filter: '{"query_string":{"query":"user:@{user}"}}'
This rule enables Document Level Security (DLS). That is: return only the documents that satisfy the boolean query provided as an argument.
This rule lets you filter the results of a read request using a boolean query. You can use dynamic variables, e.g. @{user} (see the dedicated paragraph), to inject a user name or some header values in the query, or even environment variables.
Example: per-user index segmentation
In the index "test-dls", each user can only search documents whose field "user" matches their user name. I.e. a user with username "paul" requesting all documents in the "test-dls" index won't see returned a document containing a field "user": "jeff".
- name: "::PER-USER INDEX SEGMENTATION::"
  proxy_auth: "*"
  indices: ["test-dls"]
  filter: '{"bool": { "must": { "match": { "user": "@{user}" }}}}'
Example 2: Prevent search of "classified" documents.
In this example, we want to prevent users belonging to group "press" from seeing any document that has a field "access_level" with the value "CLASSIFIED". And this policy is applied to all indices (no indices rule is specified).
- name: "::Press::"
  groups: ["press"]
  filter: '{"bool": { "must_not": { "match": { "access_level": "CLASSIFIED" }}}}'
⚠️IMPORTANT The filter and fields rules will only affect "read" requests; "write" requests will not match, because otherwise clients would implicitly be allowed to "write" without the filtering restriction. For reference, this behaviour is identical to X-Pack and Search Guard.
⚠️IMPORTANT Beginning with version 1.27.0, all ROR internal requests from Kibana will not match blocks containing filter and/or fields rules. These requests are used to perform the Kibana login and dynamic config reload.
If you want to allow write requests (i.e. for Kibana sessions), just duplicate the ACL block: keep the filter and/or fields rule in the first one, and drop it from the second.

fields

This rule enables Field Level Security (FLS). That is:
    for responses where fields with values are returned (e.g. Search/Get API) - filter and show only the allowed fields
    make not-allowed fields unsearchable - such fields used in QueryDSL requests (e.g. Search/MSearch API) have no impact on the search result.
In other words: FLS prevents a certain user from using fields that are not allowed. From the user's perspective it seems like such fields are nonexistent.
Definition
The fields rule definition consists of two parts:
    A non-empty list of field names (blacklisted or whitelisted). Supports wildcards and user runtime variables.
    The FLS engine definition (global setting, optional). See: engine details.
⚠️IMPORTANT With the default FLS engine it's required to install the ReadonlyREST plugin in all the data nodes. Different configurations allowing to avoid this requirement are described in engine details.
Field names
Fields can be defined using two access modes: blacklist and whitelist.
Blacklist mode (recommended)
Specifies which fields should not be allowed, prefixed with ~ (other fields from the mapping become allowed implicitly). Example:
fields: ["~excluded_fields_prefix_*", "~excluded_field", "~another_excluded_field.nested_field"]
Return documents but deprived of the fields that:
    start with excluded_fields_prefix_
    are equal to excluded_field
    are equal to another_excluded_field.nested_field
Whitelist mode
Specifies which fields should be allowed explicitly (other fields from the mapping become not-allowed implicitly). Example:
fields: ["allowed_fields_prefix_*", "_*", "allowed_field.nested_field.text"]
Return documents deprived of all the fields, except the ones that:
    start with allowed_fields_prefix_
    start with underscore
    are equal to allowed_field.nested_field.text
NB: You can only provide a full black list or white list. Grey lists (i.e. ["~a", "b"]) are invalid settings and ROR will refuse to boot up if this condition is detected.
Example: hide prices from catalogue indices
- name: "External users - hide prices"
  fields: ["~price"]
  indices: ["catalogue_*"]
⚠️IMPORTANT Any metadata fields, e.g. _id or _index, cannot be used in the fields rule.
⚠️IMPORTANT The filter and fields rules will only affect "read" requests; "write" requests will not match, because otherwise clients would implicitly be allowed to "write" without the filtering restriction. For reference, this behaviour is identical to X-Pack and Search Guard.
⚠️IMPORTANT Beginning with version 1.27.0, all ROR internal requests from Kibana will not match blocks containing filter and/or fields rules. These requests are used to perform the Kibana login and dynamic config reload.
If you want to allow write requests (i.e. for Kibana sessions), just duplicate the ACL block: keep the filter and/or fields rule in the first one, and drop it from the second.

Configuring an ACL with filter/fields rules when using Kibana

A normal Kibana session interacts with Elasticsearch using a mix of actions which we can roughly group into two macro categories of "read" and "write" actions. However, the fields and filter rules will only match read requests. They will also block the ROR internal requests used to log in to Kibana and reload the config. This means that a complete Kibana session cannot anymore be entirely matched by a single ACL block like it normally would.
For example, this ACL block would perfectly support a complete Kibana session. That is, 100% of the actions (browser HTTP requests) would be allowed by this ACL block.
- name: "::RW_USER::"
  auth_key: rw_user:pwd
  kibana_access: rw
  indices: ["r*", ".kibana"]
However, when we introduce a filter (or fields) rule, this block will be able to match only some of the actions (only the "read" ones).
- name: "::RW_USER::"
  auth_key: rw_user:pwd
  kibana_access: rw # <-- won't work because of filter present in block
  indices: ["r*", ".kibana"]
  filter: '{"query_string":{"query":"DestCountry:FR"}}' # <-- will reject all write requests! :(
The solution is to duplicate the block. The first one will intercept (and filter!) the read requests. The second one will intercept the remaining actions. Both ACL blocks together will entirely support a whole Kibana session.
- name: "::RW_USER (filter read requests)::"
  auth_key: rw_user:pwd
  indices: ["r*"] # <-- DO NOT FILTER THE .kibana INDEX!
  filter: '{"query_string":{"query":"DestCountry:FR"}}'

- name: "::RW_USER (allow remaining requests)::"
  auth_key: rw_user:pwd
  kibana_access: rw
  indices: ["r*", ".kibana"]
NB: Look at how we make sure that the requests to ".kibana" won't get filtered by specifying an indices rule in the first block.
Here is another example, a bit more complex. Look at how we can duplicate the "PERSONAL_GRP" ACL block so that the read requests to the "r*" indices can be filtered, while all the other requests are intercepted by the second block (which is identical to the one we had before the duplication).
Before adding the filter rule:
- name: "::PERSONAL_GRP::"
  groups: ["Personal"]
  kibana_access: rw
  indices: ["r*", ".kibana_@{user}"]
  kibana_hide_apps: ["readonlyrest_kbn", "timelion"]
  kibana_index: ".kibana_@{user}"
After adding the filter rule (using the block duplication strategy).
- name: "::PERSONAL_GRP (FILTERED SEARCH)::"
  groups: ["Personal"]
  indices: [ "r*" ]
  filter: '{"query_string":{"query":"DestCountry:FR"}}'

- name: "::PERSONAL_GRP::"
  groups: ["Personal"]
  kibana_access: rw
  indices: ["r*", ".kibana_@{user}"]
  kibana_hide_apps: ["readonlyrest_kbn", "timelion"]
  kibana_index: ".kibana_@{user}"

response_fields

This rule allows filtering Elasticsearch responses using a list of fields. It works in a very similar way to the fields rule. In contrast to the fields rule, which filters out document fields, this rule filters out response fields. It doesn't make use of Field Level Security (FLS) and can be applied to every response returned by Elasticsearch.
It can be configured in two modes:
    whitelist allowing only the defined fields from the response object
    blacklist filtering out (removing) only the defined fields from the response object
Blacklist mode
Specifies which fields should be filtered out by adding the ~ prefix to the field name. Other fields in the response will be implicitly allowed. For example:
response_fields: ["~excluded_fields_prefix_*", "~excluded_field", "~another_excluded_field.nested_field"]
The above will return the usual response object, but deprived (if found) of the fields that:
    start with excluded_fields_prefix_
    are equal to excluded_field
    are equal to another_excluded_field.nested_field
Whitelist mode
In this mode the rule filters out each field that isn't defined in the rule.
response_fields: ["allowed_fields_prefix_*", "_*", "allowed_field.nested_field.text"]
Return response deprived of all the fields, except the ones that:
    start with allowed_fields_prefix_
    start with underscore
    are equal to allowed_field.nested_field.text
NB: You can only provide a full black list or white list. Grey lists (i.e. ["~a", "b"]) are invalid settings and ROR will refuse to boot up if this condition is detected.
Example: allow only the cluster_name and status fields in the cluster health response.
Without any filtering, the response from /_cluster/health looks more or less like:
{
  "cluster_name": "ROR_SINGLE",
  "status": "yellow",
  "timed_out": false,
  "number_of_nodes": 1,
  "number_of_data_nodes": 1,
  "active_primary_shards": 2,
  "active_shards": 2,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 2,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 50.0
}
but after configuring a rule such as:

- name: "Filter cluster health response"
  uri_re: "^/_cluster/health"
  response_fields: ["cluster_name", "status"]
the response from above will look like:

{
  "cluster_name": "ROR_SINGLE",
  "status": "yellow"
}
NB: Any response field can be filtered using this rule.

Authentication

Local ReadonlyREST users are authenticated via HTTP Basic Auth. This authentication method is secure only if SSL is enabled.

auth_key

auth_key: sales:p455wd
Accepts HTTP Basic Auth. Configure this value in clear text. Clients will need to provide the header, e.g. Authorization: Basic c2FsZXM6cDQ1NXdk where "c2FsZXM6cDQ1NXdk" is Base64 for "sales:p455wd".
⚠️IMPORTANT: this rule is handy just for tests; replace it with another rule that hashes credentials, like auth_key_sha512 or auth_key_unix.

auth_key_sha512

auth_key_sha512: 280ac6f...94bf9
Accepts HTTP Basic Auth. The value is a string like username:password hashed in SHA512. Clients will need to provide the usual Authorization header.
Other rules with less secure SHA algorithms are also available: auth_key_sha256 and auth_key_sha1.
The rules also support an alternative syntax, where only the password is hashed, e.g.:
auth_key_sha512: "admin:280ac6f...94bf9"
In the example above, admin is the username and 280ac6f...94bf9 is the hashed secret.

auth_key_pbkdf2

auth_key_pbkdf2: "KhIxF5EEYkH5GPX51zTRIR4cHqhpRVALSmTaWE18mZEL2KqCkRMeMU4GR848mGq4SDtNvsybtJ/sZBuX6oFaSg==" # logstash:logstash
auth_key_pbkdf2: "logstash:JltDNAoXNtc7MIBs2FYlW0o1f815ucj+bel3drdAk2yOufg2PNfQ51qr0EQ6RSkojw/DzrDLFDeXONumzwKjOA==" # logstash:logstash
Accepts HTTP Basic Auth. The value is hashed in the same way as in the auth_key_sha512 rule, but using the PBKDF2 key derivation function. At the moment there is no way to configure it, so during hash generation the user has to take into consideration the following PBKDF2 input parameter values:
    Pseudorandom function: HmacSHA512
    Salt: the value being hashed, used as the salt (e.g. when hashing logstash:logstash, use logstash:logstash as the salt)
    Iterations count: 10000
    Derived key length: 512 bits
The hash can be calculated using this calculator (notice that the salt has to be Base64 encoded).

auth_key_unix

auth_key_unix: test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0 # Hashed for "test:test"
⚠️IMPORTANT this hashing algorithm is very CPU intensive, so we implemented a caching mechanism around it. However, this will not protect Elasticsearch from a DoS attack with a high number of requests with random credentials.
This method is based on the /etc/shadow file syntax.
If you configured sha512 encryption with 65535 rounds on your system, the hash in /etc/shadow for the account test:test will be test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0
readonlyrest:
    access_control_rules:
    - name: Accept requests from users in group team1 on index1
      groups: ["team1"]
      indices: ["index1"]

    users:
    - username: test
      auth_key_unix: test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0 #test:test
      groups: ["team1"]
You can generate the hash with the mkpasswd Linux command; you need the whois package (apt-get install whois, or equivalent):
mkpasswd -m sha-512 -R 65535
You can also generate the hash with a Python script (works on Linux):

#!/usr/bin/python
import crypt
import random
import sys
import string

def sha512_crypt(password, salt=None, rounds=None):
    # Generate a random 8-character salt if none was provided
    if salt is None:
        rand = random.SystemRandom()
        salt = ''.join([rand.choice(string.ascii_letters + string.digits)
                        for _ in range(8)])

    prefix = '$6$'
    if rounds is not None:
        rounds = max(1000, min(999999999, rounds or 5000))
        prefix += 'rounds={0}$'.format(rounds)
    return crypt.crypt(password, prefix + salt)


if __name__ == '__main__':
    if len(sys.argv) > 1:
        print(sha512_crypt(sys.argv[1], rounds=65535))
    else:
        print("Argument is missing, <password>")
Finally, you have to put your username at the beginning of the hash, with ":" as a separator: test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0
Here, test is the username and $6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0 is the hash for test (the password is identical to the username in this example).

proxy_auth

proxy_auth: "*"
Delegated authentication. Trust that a reverse proxy has taken care of authenticating the request and has written the resolved user name into the X-Forwarded-User header. The value "*" in the example will let this rule match any username value contained in the X-Forwarded-User header.
If you are using this technique for authentication together with our Kibana plugins, don't forget to add this snippet to conf/kibana.yml:
readonlyrest_kbn.proxy_auth_passthrough: true
so that Kibana will forward the necessary headers to Elasticsearch.
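A sketch of a block that trusts the proxy for identity while still narrowing what those users can touch (the index pattern is an assumption):

- name: "Proxy-authenticated users"
  proxy_auth: "*"
  indices: ["app-logs-*"]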

users

users: ["root", "*@mydomain.com"]
Limit access to specific users whose username is contained in, or matches one of the patterns in, the array. This rule is independent from the authentication method chosen, so it will work well in conjunction with LDAP, JWT, proxy_auth, and all the others.
For example:
readonlyrest:
    access_control_rules:
    - name: "JWT auth for viewer role, limited to certain usernames"
      kibana_access: ro
      users: ["root", "*@mydomain.com"]
      jwt_auth:
        name: "jwt_provider_1"
        roles: ["viewer"]

groups

groups: ["group1", "group2"]
Limit access to members of specific user groups. The members and groups are defined in the users section.
A single entry inside the users section tells us that:
    a given member with a username matching one of the patterns in the username array ...
    belongs to the local groups listed in the groups array (examples 1 & 2 below), OR belongs to local groups that are the result of a detailed mapping between local group names and external groups (example 3 below)
    provided they can be authenticated and/or authorized by the auth rule(s)
In general it looks like this:
users:
- username: ["pattern1", "pattern2", ...]
  groups: ["local_group1", "local_group2", ...]
  authentication_rule: ...

- username: ["pattern1", "pattern2", ...]
  groups: ["local_group1", "local_group2", ...]
  authentication_rule: ...
  authorization_rule: ...

- username: ["pattern1", "pattern2", ...]
  groups:
  - local_group1: ["external_group1", "external_group2"]
  - local_group2: ["external_group2"]
  authentication_with_authorization_rule: ... # `ldap_auth` or `jwt_auth` or `ror_kbn_auth`
For details see User management.

session_max_idle

session_max_idle: 1h
⚠️DEPRECATED Browser session timeout (via cookie). Example values: 1w (one week), 10s (10 seconds), 7d (7 days), etc. NB: not available for Elasticsearch 2.x.

ldap_authentication

simple version: ldap_authentication: ldap1
extended version:
ldap_authentication:
    name: ldap1
    cache_ttl: 10 sec
It handles LDAP authentication only, using the configured LDAP connector (here ldap1). Check the LDAP connector section to see how to configure the connector.

ldap_authorization

ldap_authorization:
    name: "ldap1"
    groups: ["group3"]
    cache_ttl: 10 sec
It handles LDAP authorization only, using the configured LDAP connector (here ldap1). It matches when the previously authenticated user has groups in LDAP and belongs to at least one of the configured groups. Check the LDAP connector section to see how to configure the connector.

ldap_auth

ldap_auth:
    name: "ldap1"
    groups: ["group3"]
It handles both authentication and authorization using the configured LDAP connector (here ldap1). The same functionality can be achieved using the two rules described above:
ldap_authentication: ldap1
ldap_authorization:
    name: "ldap1"
    groups: ["group3"]
See the dedicated LDAP section

jwt_auth

See below, the dedicated JSON Web Tokens section

external-basic-auth

Used to delegate authentication to another server that supports HTTP Basic Auth. See below, the dedicated External BASIC Auth section

groups_provider_authorization

Used to delegate groups resolution for a user to a JSON microservice. See below, the dedicated Groups Provider Authorization section

ror_kbn_auth

For Enterprise customers only, required for SAML authentication.
readonlyrest:
    access_control_rules:

    - name: "ReadonlyREST Enterprise instance #1"
      ror_kbn_auth:
        name: "kbn1"
        roles: ["SAML_GRP_1"]

    - name: "ReadonlyREST Enterprise instance #2"
      ror_kbn_auth:
        name: "kbn2"

    ror_kbn:
    - name: kbn1
      signature_key: "shared_secret_kibana1" # <- use environmental variables for better security!

    - name: kbn2
      signature_key: "shared_secret_kibana2" # <- use environmental variables for better security!
This authentication and authorization connector represents the secure channel (based on JWT tokens) of signed messages necessary for our Enterprise Kibana plugin to securely pass back to ES the username and groups information coming from browser-driven authentication protocols like SAML.
Continue reading about this in the Kibana plugin documentation, in the dedicated SAML section.

Ancillary rules

verbosity

verbosity: error
Don't spam the Elasticsearch log file by printing log lines for requests that match this block. Defaults to info.
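For instance, to keep a noisy monitoring probe out of the logs (the host is a placeholder):

- name: "Monitoring probes - don't log"
  hosts: ["10.0.0.5"] # hypothetical monitoring host
  verbosity: error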

Audit & Troubleshooting

The main issues seen in support cases:
    Bad ordering of ACL blocks. Remember that the ACL is evaluated sequentially, block by block, and the first block whose rules all match is accepted.
    Users don't know how to read the HIS field in the logs, which is crucial because it contains a trace of the evaluation of rules and blocks.
    LDAP configuration: LDAP is tricky to configure in any system. Configure the ES root logger to DEBUG by editing $ES_PATH_CONF/log4j2.properties to see a trace of the LDAP messages.

Interpreting logs