For Elasticsearch
ReadonlyREST is a lightweight Elasticsearch plugin that adds encryption, authentication, authorization and access control capabilities to the Elasticsearch embedded REST API. The core of this plugin is an ACL engine that checks each incoming request through a sequence of rules, a bit like a firewall. There are a dozen rules that can be grouped in sequences of blocks and form a powerful representation of a logic chain.
The Elasticsearch plugin, known as ReadonlyREST Free, is released under the GPLv3 license or, alternatively, a commercial license (see ReadonlyREST Embedded), and lays the technological foundations for the companion Kibana plugin, which is released in two versions: ReadonlyREST PRO and ReadonlyREST Enterprise. Unlike the Elasticsearch plugin, the Kibana plugins are commercial only, but they rely on the Elasticsearch plugin in order to work.
In this document we are going to describe how to operate the Elasticsearch plugin and all of its features. Once installed, this plugin greatly extends the Elasticsearch HTTP API (port 9200), adding numerous extra capabilities:
- Encryption: transform the Elasticsearch API from HTTP to HTTPS
- Authentication: require credentials
- Authorization: declare groups of users, permissions and partial access to indices.
- Access control: complex logic can be modeled using an ACL (access control list) written in YAML.
- Audit events: a trace of the access requests can be logged to file or index (or both).
The following diagram models an instance of Elasticsearch with the ReadonlyREST plugin installed, and configured with SSL encryption and an ACL with at least one "allow" type ACL block.

readonlyrest request processing diagram
1. The User Agent (i.e. cURL, Kibana) sends a search request to Elasticsearch using port 9200 and the HTTPS URL schema.
2. The HTTPS filter in the ReadonlyREST plugin unwraps the SSL layer and hands the request over to the Elasticsearch HTTP stack.
3. The HTTP stack in Elasticsearch parses the HTTP request.
4. The HTTP handler in Elasticsearch extracts the indices, action and request type, and creates a SearchRequest (internal Elasticsearch format).
5. The SearchRequest goes through the ACL (access control list); external systems like LDAP can be asynchronously queried, and an exit result is eventually produced.
6. The exit result is used by the audit event serializer to write a record to an index and/or the Elasticsearch log file.
7. If no ACL block was matched, or if a type: forbid block was matched, ReadonlyREST does not forward the search request to the search engine and creates an "unauthorized" HTTP response.
8. In case the ACL matched a type: allow block, the request is forwarded to the search engine.
9. The Elasticsearch code creates a search response containing the results of the query.
10. The search response is converted to an HTTP response by the Elasticsearch code.
11. The HTTP response flows back to ReadonlyREST's HTTPS filter and to the User Agent.
To install ReadonlyREST plugin for Elasticsearch:
Download the plugin from the official download page: select your Elasticsearch version and send yourself a link to the compatible ReadonlyREST zip file.
bin/elasticsearch-plugin install file:///tmp/readonlyrest-X.Y.Z_esW.Q.U.zip
Notice how we need to type in the format file:// + absolute path (yes, with three slashes).

During the installation you will be shown a warning like this:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@

When prompted about additional permissions, answer y.
If you are using Elasticsearch 8.0.x or newer, you need an extra post-installation step. Depending on the Elasticsearch version, this command might tweak the main Elasticsearch installation files and/or copy some jars to the plugins/readonlyrest directory.

# Patch ES
$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar patch
⚠️IMPORTANT: for Elasticsearch 8.3.x or newer, the patching operation requires root user privileges.

You can verify if Elasticsearch was correctly patched using the verify command:

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar verify

Please note that the tool assumes that you run it from the root of your ES installation directory, or that the default installation directory is /usr/share/elasticsearch. If you want or need to, you can tell it where your Elasticsearch is installed by executing one of the tool's commands with the --es-path parameter:

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar patch --es-path /my/custom/path/to/es/folder

or

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar verify --es-path /my/custom/path/to/es/folder

NB: In case of any problems with the ror-tools, please call:

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar --help
Create and edit the readonlyrest.yml settings file in the same directory where elasticsearch.yml is found:

vim $ES_PATH_CONF/readonlyrest.yml
Now write some basic settings, just to get started. In this example we are going to tell ReadonlyREST to require HTTP Basic Authentication for all the HTTP requests, and return 401 Unauthorized otherwise.

readonlyrest:
  access_control_rules:
  - name: "Require HTTP Basic Auth"
    type: allow
    auth_key: user:password
(applies to ES 6.4.0 or greater)
ReadonlyREST and X-Pack security module can't run together, so the latter needs to be disabled.
Edit elasticsearch.yml and append xpack.security.enabled: false:

vim $ES_PATH_CONF/elasticsearch.yml
bin/elasticsearch
or:
service start elasticsearch
Depending on your environment.
Now you should be able to see the logs and ReadonlyREST related lines like the one below:
[2018-09-18T13:56:25,275][INFO ][o.e.p.PluginsService ] [c3RKGFJ] loaded plugin [readonlyrest]
The following command should succeed, and the response should show a status code 200.
curl -vvv -u user:password "http://localhost:9200/_cat/indices"
The following command should not succeed, and the response should show a status code 401.
curl -vvv "http://localhost:9200/_cat/indices"
To upgrade ReadonlyREST for Elasticsearch:
Either kill the process manually, or use:
service stop elasticsearch
depending on your environment.
If you are using Elasticsearch 8.0.x or newer, you need an extra pre-uninstallation step. This will remove all previously copied jars from ROR's installation directory.
# Unpatch ES
$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar unpatch
You can verify if Elasticsearch was correctly unpatched using the verify command:

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar verify

NB: In case of any problems with the ror-tools, please call:

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar --help
bin/elasticsearch-plugin remove readonlyrest
bin/elasticsearch-plugin install file://<download_dir>/readonlyrest-<ROR_VERSION>_es<ES_VERSION>.zip
e.g.
bin/elasticsearch-plugin install file:///tmp/readonlyrest-1.16.15_es6.1.1.zip
If you are using Elasticsearch 8.0.x or newer, you need an extra post-installation step. Depending on the Elasticsearch version, this command might tweak the main Elasticsearch installation files and/or copy some jars to the plugins/readonlyrest directory.

# Patch ES
$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar patch
⚠️IMPORTANT: for Elasticsearch 8.3.x or newer, the patching operation requires root user privileges.

You can verify if Elasticsearch was correctly patched using the verify command:

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar verify

NB: In case of any problems with the ror-tools, please call:

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar --help
bin/elasticsearch
or:
service start elasticsearch
Depending on your environment.
Now you should be able to see the logs and ReadonlyREST related lines like the one below:
[2018-09-18T13:56:25,275][INFO ][o.e.p.PluginsService ] [c3RKGFJ] loaded plugin [readonlyrest]
Either kill the process manually, or use:
service stop elasticsearch
depending on your environment.
If you are using Elasticsearch 8.0.x or newer, you need an extra pre-uninstallation step. This will remove all previously copied jars from ROR's installation directory.
# Unpatch ES
$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar unpatch
You can verify if Elasticsearch was correctly unpatched using the verify command:

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar verify

NB: In case of any problems with the ror-tools, please call:

$ jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar --help
bin/elasticsearch-plugin remove readonlyrest
bin/elasticsearch
or:
service start elasticsearch
Depending on your environment.
Unless some advanced features are being used (see below), this Elasticsearch plugin operates like a lightweight, stateless filter glued in front of the Elasticsearch HTTP API. Therefore it's sufficient to install the plugin only in the nodes that expose the HTTP interface (port 9200).
Installing ReadonlyREST in a dedicated node has numerous advantages:
- No need to restart all nodes, only the one you have installed the plugin into.
- No need to restart all nodes for updating the security settings
- No need to restart all nodes when a security update is out
- Less complexity on the actual cluster nodes.
For example, if we want to move to HTTPS all the traffic coming from Logstash into a 9-node Elasticsearch cluster that has been running stably in production for a while, it's not necessary to install the ReadonlyREST plugin in all the nodes.
Creating a dedicated, lightweight ES node where to install ReadonlyREST:
1.
2. Create a new, lightweight, dedicated node without shards, nor master eligibility.
3.
4. Configure Logstash to connect to the new node directly in HTTPS.
⚠️IMPORTANT: By default, when the fields rule is used, it's required to install the ReadonlyREST plugin in all the data nodes.

The core of this plugin is an ACL (access control list): a logic structure very similar to the one found in firewalls. The ACL is part of the plugin configuration, and it's written in YAML.
- The ACL is composed of an ordered sequence of named blocks
- Each block contains some rules, and a policy (forbid or allow)
- HTTP requests run through the blocks, starting from the first
- The first block whose rules all match decides whether to forbid or allow the request (according to its policy)
- If none of the blocks match, the request is rejected

⚠️IMPORTANT: The ACL blocks are evaluated sequentially, therefore the ordering of the ACL blocks is crucial. The order of the rules inside an ACL block, instead, is irrelevant.
readonlyrest:
  access_control_rules:
  - name: "Block 1 - only Logstash indices are accessible"
    type: allow # <-- default policy type is "allow", so this line could be omitted
    indices: ["logstash-*"] # <-- this is a rule
  - name: "Block 2 - Blocking everything from a network"
    type: forbid
    hosts: ["10.0.0.0/24"] # <-- this is a rule
An example of an Access Control List (ACL) made of two blocks.
The YAML snippet above, like all of this plugin's settings, should be saved inside the readonlyrest.yml file. Create this file on the same path where elasticsearch.yml is found.

TIP: If you are a subscriber of the PRO or Enterprise Kibana plugin, you can edit and refresh the settings through a GUI. For more on this, see the documentation for the ReadonlyREST plugin for Kibana.
An SSL encrypted connection is a prerequisite for the secure exchange of credentials and data over the network. To make use of it you need a certificate and a private key. Letsencrypt certificates work just fine (see the tutorial below). Before ReadonlyREST 1.44.0, both files (certificate and private key) had to be placed inside a PKCS#12 or JKS keystore; see the tutorial at the end of this section. ReadonlyREST 1.44.0 or newer supports using PEM files directly, without the need to use a keystore.
ReadonlyREST can be configured to encrypt network traffic on two independent levels:
1. HTTP (port 9200)
2. Internode communication - transport module (port 9300)

An Elasticsearch node with ReadonlyREST can join an existing cluster based on native SSL from the xpack.security module. This configuration is useful to deploy ReadonlyREST Enterprise for Kibana to an existing large production cluster without disrupting any configuration. More on this in the dedicated paragraph of this section.
It wraps the connection between the client and the exposed REST API in an SSL context, making it encrypted and secure.

⚠️IMPORTANT: To enable SSL for the REST API, open elasticsearch.yml and append this one line:

http.type: ssl_netty4
Now in readonlyrest.yml add the following settings:

readonlyrest:
  ssl:
    keystore_file: "keystore.jks" # or keystore.p12 for PKCS#12 format
    keystore_pass: readonlyrest
    key_pass: readonlyrest

The keystore should be stored in the same directory as elasticsearch.yml and readonlyrest.yml.

This option encrypts communication between the nodes forming an Elasticsearch cluster.
⚠️IMPORTANT: To enable SSL for internode communication, open elasticsearch.yml and append this one line:

transport.type: ror_ssl_internode
In readonlyrest.yml the following settings must be added (this is just an example configuration presenting the most important properties):

readonlyrest:
  ssl_internode:
    keystore_file: "keystore.jks" # or keystore.p12 for PKCS#12 format
    keystore_pass: readonlyrest
    key_pass: readonlyrest

Similar to ssl for HTTP, the keystore should be stored in the same directory as elasticsearch.yml and readonlyrest.yml. This config must be added to all nodes taking part in encrypted communication within the cluster.

Internode communication with XPack nodes
It is possible to set up internode SSL between ROR and XPack nodes. It works only for ES above 6.3.

To set up a cluster in such a configuration, you have to generate a certificate for the ROR node according to this description: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup.html#generate-certificates.
The generated elastic-certificates.p12 can then be used in the ROR node with a configuration like this:

readonlyrest:
  ssl_internode:
    enable: true
    keystore_file: "elastic-certificates.p12"
    keystore_pass: [ password for generated certificate ]
    key_pass: [ password for generated certificate ]
    truststore_file: "elastic-certificates.p12"
    truststore_pass: [ password for generated certificate ]
    client_authentication: true # default: false
    certificate_verification: true # certificate verification is enabled by default on XPack nodes
By default, certificate verification is disabled. This means the certificate is not validated in any way, so all certificates are accepted. This is useful in local/test environments, where security is not the most important concern. In production environments, it is advised to enable this option. It can be done by means of:

certificate_verification: true

under the ssl_internode section. This option is applicable only for internode SSL.

By default, client authentication is disabled. When enabled, the server asks the client for its certificate, so ES is able to verify the client's identity. It can be enabled by means of:

client_authentication: true

under the ssl section. This option is applicable for REST API external SSL and internode SSL.

Optionally, it's possible to specify a list of allowed SSL protocols and SSL ciphers. Connections from clients that don't support the listed protocols or ciphers will be dropped.
readonlyrest:
  ssl:
    # put the keystore in the same dir with elasticsearch.yml
    keystore_file: "keystore.jks"
    keystore_pass: readonlyrest
    key_pass: readonlyrest
    allowed_protocols: [TLSv1.2]
    allowed_ciphers: [TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]
ReadonlyREST will log a list of available ciphers and protocols supported by the current JVM at startup.
[2018-01-03T10:09:38,683][INFO ][t.b.r.e.SSLTransportNetty4] ROR SSL: Available ciphers: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA
[2018-01-03T10:09:38,684][INFO ][t.b.r.e.SSLTransportNetty4] ROR SSL: Restricting to ciphers: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
[2018-01-03T10:09:38,684][INFO ][t.b.r.e.SSLTransportNetty4] ROR SSL: Available SSL protocols: TLSv1,TLSv1.1,TLSv1.2
[2018-01-03T10:09:38,685][INFO ][t.b.r.e.SSLTransportNetty4] ROR SSL: Restricting to SSL protocols: TLSv1.2
ReadonlyREST allows using a custom truststore, replacing the default one provided by the JRE. A custom truststore can be set with:

truststore_file: "truststore.jks"
truststore_pass: truststorepass

under the ssl or ssl_internode section. This option is applicable to both SSL modes: external SSL and internode SSL. The truststore should be stored in the same directory as elasticsearch.yml and readonlyrest.yml (like the keystore). When not specified, ReadonlyREST uses the default truststore.

If you are using ReadonlyREST 1.44.0 or newer, you can use PEM files directly, without the need of placing them inside a keystore or truststore.
To use PEM files instead of a keystore file, use this configuration instead of the keystore_file, keystore_pass and key_pass fields:

server_certificate_key_file: private_key.pem
server_certificate_file: cert_chain.pem

To use a PEM file instead of a truststore file, use this configuration instead of the truststore_file and truststore_pass fields:

client_trusted_certificate_file: trusted_certs.pem
We are going to show how to first add all the certificates and the private key into a PKCS#12 keystore, and then (optionally) convert it to a JKS keystore. ReadonlyREST supports both formats.

⚠️IMPORTANT: if you are using ReadonlyREST 1.44.0 or newer, you don't have to create a keystore. You can use PEM files directly, as described above.

This tutorial can be a useful example of how to use certificates from other providers.
1. Create keys
./letsencrypt-auto certonly --standalone -d DOMAIN.TLD -d DOMAIN_2.TLD --email [email protected]
Now change to the directory (probably /etc/letsencrypt/live/DOMAIN.tld) where the certificates were created.
2. Create a PKCS12 keystore with the full chain and private key
openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -out pkcs.p12 -name NAME
3. Convert PKCS12 to JKS Keystore (Optional)
The STORE_PASS is the password which was entered in step 2 as the password for the pkcs12 file.
keytool -importkeystore -deststorepass PASSWORD_STORE -destkeypass PASSWORD_KEYPASS -destkeystore keystore.jks -srckeystore pkcs.p12 -srcstoretype PKCS12 -srcstorepass STORE_PASS -alias NAME
If you happen to get a java.io.IOException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded, you have probably forgotten to enter the correct password from step 2.

Each incoming request to the Elasticsearch node passes through the installed plugin. During Elasticsearch node startup, the plugin rejects incoming requests until it is fully started. The plugin rejects such requests with 403 Forbidden responses by default. To override this behavior, append to elasticsearch.yml:

readonlyrest.not_started_response_code: 503
readonlyrest.failed_to_start_response_code: 503
readonlyrest.not_started_response_code - the HTTP code returned when the plugin has not started yet. Possible values are 403 (default) and 503.
readonlyrest.failed_to_start_response_code - the HTTP code returned when the plugin failed to start (e.g. because of a malformed ACL). Possible values are 403 (default) and 503.

Every block must have at least the name field, and optionally a type field valued either "allow" or "forbid". If you omit the type, your block will be treated as type: allow by default.

Keep in mind that the ReadonlyREST ACL is a white list, so by default all requests are blocked, unless you specify a block of rules that allows all or some requests.
- name will appear in logs, so keep it short and distinctive.
- type can be either allow or forbid. It can be omitted; the default is allow.
- name: "Block 1 - Allowing anything from localhost"
type: allow
# In real life now you should increase the specificity by adding rules here (otherwise this block will allow all requests!)
Example: the simplest example of an allow block.
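For instance, the block above can be made more specific by adding a network-level rule such as hosts (a sketch reusing rules shown elsewhere in this document; the address value is illustrative):

```yaml
- name: "Block 1 - Allowing anything from localhost"
  type: allow
  hosts: ["127.0.0.1"] # <-- now only requests originating from localhost match
```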
⚠️IMPORTANT: if no blocks are configured, ReadonlyREST rejects all requests.
ReadonlyREST access control rules can be divided into the following categories:
- Authentication & Authorization rules
- Elasticsearch level rules
- Kibana-related rules
- HTTP level rules
- Network level rules
Please refrain from using HTTP level rules to protect certain indices or limit what people can do to an index. The level of control at this layer is really coarse, especially because the Elasticsearch REST API does not always respect RESTful principles. This makes HTTP a bad abstraction level to write ACLs in Elasticsearch altogether.

The only clean and exhaustive way to implement access control is to reason about requests AFTER Elasticsearch has parsed them. Only then, the list of affected indices and the action will be known for sure. See Elasticsearch level rules.
This section contains a description of the rules that can be used to authenticate and/or authorize users. Most of the following rules use HTTP Basic Auth, so the credentials are passed with the Authorization header and can be easily decoded if the request is intercepted by a malicious third party. Please note that this authentication method is secure only if SSL is enabled.

auth_key: sales:p455wd
It's an authentication rule that accepts HTTP Basic Auth. Configure this value in clear text. Clients will need to provide the header, e.g. Authorization: Basic c2FsZXM6cDQ1NXdk, where "c2FsZXM6cDQ1NXdk" is Base64 for "sales:p455wd".

⚠️IMPORTANT: this rule is handy just for tests; replace it with another rule that hashes credentials, like auth_key_sha512 or auth_key_unix.

auth_key_sha512: 280ac6f...94bf9

The authentication rule that accepts HTTP Basic Auth. The value is a string like username:password hashed in SHA512. Clients will need to provide the usual Authorization header.

There are also other rules available with less secure SHA algorithms: auth_key_sha256 and auth_key_sha1.

The rules also support an alternative syntax, where only the password is hashed, e.g.:

auth_key_sha512: "admin:280ac6f...94bf9"

In the example above, admin is the username and 280ac6f...94bf9 is the hashed secret.

auth_key_pbkdf2: "KhIxF5EEYkH5GPX51zTRIR4cHqhpRVALSmTaWE18mZEL2KqCkRMeMU4GR848mGq4SDtNvsybtJ/sZBuX6oFaSg==" # logstash:logstash
auth_key_pbkdf2: "logstash:JltDNAoXNtc7MIBs2FYlW0o1f815ucj+bel3drdAk2yOufg2PNfQ51qr0EQ6RSkojw/DzrDLFDeXONumzwKjOA==" # logstash:logstash

The authentication rule that accepts HTTP Basic Auth. The value is hashed in the same way as in the auth_key_sha512 rule, but using the PBKDF2 key derivation function. At the moment there is no way to configure it, so during hash generation the user has to take into consideration the following PBKDF2 input parameter values:

| Input parameter | Value | Comment |
|---|---|---|
| Pseudorandom function | HmacSHA512 | |
| Salt | use the hashed value as the salt | e.g. hashed value = logstash:logstash, use logstash:logstash as the salt |
| Iterations count | 10000 | |
| Derived key length | 512 bits | |
auth_key_unix: test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0 # Hashed for "test:test"
⚠️IMPORTANT this hashing algorithm is very CPU intensive, so we implemented a caching mechanism around it. However, this will not protect Elasticsearch from a DoS attack with a high number of requests with random credentials.
This is an authentication rule based on the /etc/shadow file syntax.

If you configured sha512 encryption with 65535 rounds on your system, the hash in /etc/shadow for the account test:test will be test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0
readonlyrest:
  access_control_rules:
  - name: Accept requests from users in group team1 on index1
    groups: ["team1"]
    indices: ["index1"]

  users:
  - username: test
    auth_key_unix: test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0 #test:test
    groups: ["team1"]
You can generate the hash with the mkpasswd Linux command; you need the whois package (apt-get install whois or equivalent):

mkpasswd -m sha-512 -R 65534

You can also generate the hash with a Python script (works on Linux):
#!/usr/bin/python3
# Requires the stdlib crypt module (Linux only; removed in Python 3.13).
import crypt
import random
import string
import sys

def sha512_crypt(password, salt=None, rounds=None):
    if salt is None:
        rand = random.SystemRandom()
        salt = ''.join(rand.choice(string.ascii_letters + string.digits)
                       for _ in range(8))
    prefix = '$6$'
    if rounds is not None:
        rounds = max(1000, min(999999999, rounds))
        prefix += 'rounds={0}$'.format(rounds)
    return crypt.crypt(password, prefix + salt)

if __name__ == '__main__':
    if len(sys.argv) > 1:
        print(sha512_crypt(sys.argv[1], rounds=65535))
    else:
        print("Argument is missing: <password>")
Finally, you have to put your username at the beginning of the hash, with ":" as the separator:

test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0

Here, test is the username and $6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0 is the hash for test (the password is identical to the username in this example).

token_authentication:
  token: "Bearer AAEAAWVsYXN0aWMva2liYW5hL3Rva2Vu" # required, expected HTTP header content containing the token
  username: admin # required, the username used after successful authentication
  header: x-custom-authorization # optional, defaults to 'Authorization'
The authentication rule that accepts a token sent in an HTTP header (the Authorization header is the default if no custom header name is configured in the token_authentication.header field).

For an HTTP header in the format Authorization: Bearer AAEAAWVsYXN0aWMva2liYW5hL3Rva2Vu, the value Bearer AAEAAWVsYXN0aWMva2liYW5hL3Rva2Vu is the token set in the token field.

proxy_auth: "*"
Delegated authentication. Trust that a reverse proxy has taken care of authenticating the request and has written the resolved user name into the X-Forwarded-User header. The value "*" in the example will let this rule match any username value contained in the X-Forwarded-User header.

If you are using this technique for authentication with our Kibana plugins, don't forget to add this snippet to conf/kibana.yml:

readonlyrest_kbn.proxy_auth_passthrough: true
So that Kibana will forward the necessary headers to Elasticsearch.
groups_or: ["group1", "group2"]
The ACL block will match when the user belongs to any of the specified groups. The information about which users belong to which groups is defined in the users section, typically placed after the ACL, further down in the YAML.

In the users section, each entry tells us that:

- A given user with a username matching one of the patterns in the username array ...
- belongs to the local groups listed in the groups array (examples 1 & 2 below), OR belongs to local groups that are the result of a "detailed group mapping" between local group names and external groups (example 3 below) ...
- when they can be authenticated and (if an authorization rule is present) authorized by the present rule(s).
In general it looks like this:
...
- name: "ACL block with groups rule"
  indices: [x, y]
  groups_or: ["local_group1"] # this group name is defined in the "users" section

users:
- username: ["pattern1", "pattern2", ...]
  groups_or: ["local_group1", "local_group2", ...]
  <any_authentication_rule>: ...
- username: ["pattern1", "pattern2", ...]
  groups_or: ["local_group1", "local_group2", ...]
  <any_authentication_rule>: ...
  <optionally_any_authorization_rule>: ...
- username: ["pattern1", "pattern2", ...]
  groups_or:
  - local_group1: ["external_group1", "external_group2"]
  - local_group2: ["external_group2"]
  <authentication_with_authorization_rule>: ... # `ldap_auth` or `jwt_auth` or `ror_kbn_auth`
groups_and: ["group1", "group2"]

This rule is identical to the groups_or rule defined above, but this time ALL the groups listed in the array are required (boolean AND logic), as opposed to at least one (boolean OR logic) for the groups_or rule.

simple version:
ldap_authentication: ldap1
extended version:
ldap_authentication:
  name: ldap1
  cache_ttl: 10 sec
It handles LDAP authentication only, using the configured LDAP connector (here ldap1). Check the LDAP connector section to see how to configure the connector.

ldap_authorization:
  name: "ldap1"
  groups: ["group3"]
  cache_ttl: 10 sec
It handles LDAP authorization only, using the configured LDAP connector (here ldap1). It matches when the previously authenticated user has groups in LDAP and belongs to at least one of the configured groups (OR logic). Alternatively, groups_and can be used to require that the user belongs to all the listed groups (AND logic). Check the LDAP connector section to see how to configure the connector.

Shorthand rule that combines the ldap_authentication and ldap_authorization rules together. It handles both authentication and authorization using the configured LDAP connector (here ldap1).

ldap_auth:
  name: "ldap1"
  groups: ["group1", "group2"]
The same functionality can be achieved using the two rules described below:
ldap_authentication: ldap1
ldap_authorization:
  name: "ldap1"
  groups: ["group1", "group2"] # match when user belongs to at least one group
In both ldap_auth and ldap_authorization, the groups clause can be replaced by groups_and to require that the valid LDAP user belongs to all the listed groups:

ldap_auth:
  name: "ldap1"
  groups_and: ["group1", "group2"] # match when user belongs to ALL listed groups
Or equivalently:
ldap_authentication: ldap1
ldap_authorization:
  name: "ldap1"
  groups_and: ["group1", "group2"] # match when user belongs to ALL listed groups
See below, the dedicated JSON Web Tokens section. It's an authentication and authorization rule at the same time.
Used to delegate authentication to another server that supports HTTP Basic Auth. See below, the dedicated External BASIC Auth section.
Used to delegate groups resolution for a user to a JSON microservice. See below, the dedicated Groups Provider Authorization section.
For Enterprise customers only; required for SAML authentication. From ROR's perspective it authenticates and authorizes users.
readonlyrest:
  access_control_rules:
  - name: "ReadonlyREST Enterprise instance #1"
    ror_kbn_auth:
      name: "kbn1"
      groups: ["SAML_GRP_1", "SAML_GRP_2"] # <- use this field when a user should belong to at least one of the configured groups
  - name: "ReadonlyREST Enterprise instance #1 - two groups required"
    ror_kbn_auth:
      name: "kbn1"
      groups_and: ["SAML_GRP_1", "SAML_GRP_2"] # <- use this field when a user should belong to all configured groups
  - name: "ReadonlyREST Enterprise instance #2"
    ror_kbn_auth:
      name: "kbn2"

  ror_kbn:
  - name: kbn1
    signature_key: "shared_secret_kibana1" # <- use environmental variables for better security!
  - name: kbn2
    signature_key: "shared_secret_kibana2" # <- use environmental variables for better security!
This authentication and authorization connector represents the secure channel (based on JWT tokens) of signed messages necessary for our Enterprise Kibana plugin to securely pass back to ES the username and groups information coming from browser-driven authentication protocols like SAML.
users: ["root", "*@mydomain.com"]
It's NOT an authentication rule, but it can be used to limit access to specific users whose username is contained in, or matches one of, the patterns in the array. This rule is independent from the authentication method chosen, so it will work well in conjunction with LDAP, JWT, proxy_auth, and all others. The rule won't match if no user has been authenticated by some other authentication rule (which means that, to make sense, it should always be used in conjunction with some authentication rule).
For example:
readonlyrest:
  access_control_rules:
  - name: "JWT auth for viewer group (role), limited to certain usernames"
    kibana:
      access: ro
    users: ["root", "*@mydomain.com"]
    jwt_auth:
      name: "jwt_provider_1"
      groups: ["viewer"]
The kibana rule underpins all ROR Kibana-related settings that may be needed to provide a great user experience.

kibana:
  access: ro # required
  index: ".kibana_custom_index" # optional
  template_index: ".kibana_template" # optional
  hide_apps: [ "Security", "Enterprise Search"] # optional
  allowed_api_paths: # optional
  - "^/api/spaces/.*$"
  - http_method: POST
    http_path: "^/api/saved_objects/.*$"
  metadata:
    dept: "@{jwt:tech.beshu.department}"
    alert_message: "Dear @{acl:current_group} users, you are viewing dashboards for indices @{acl:available_groups}_logstash-*"
The rule consists of several sub-rules:
access
Enables the minimum set of Elasticsearch actions necessary for browsers to sustain a Kibana session, and rejects any other unrelated actions.

This "macro" rule allows the minimum set of actions necessary for a browser to use Kibana. It allows a set of actions towards the designated kibana index (see kibana.index), plus a stricter subset of read-only actions towards other indices, which are considered "data indices".

The idea is that with one single sub-rule we allow the bare minimum set of index+action combinations necessary to support a Kibana browsing session.
Possible access levels:

- ro_strict: the browser has a read-only view on Kibana dashboards and settings and all other indices.
- ro: some write requests can go through to the kibana_index index so that the UI state in "Discover" can be saved and new short urls can be created.
- rw: some more requests will be allowed towards the kibana_index index only, so Kibana dashboards and settings can be modified.
- admin: like rw, but has additional permissions to save security settings in the ReadonlyREST PRO/Enterprise app.
- unrestricted: no action is restricted.
NB: The admin access level does not mean the user will be allowed to access all indices/actions. It's just like "rw" with settings-change privileges. If you truly require unrestricted access for your Kibana user, including the ReadonlyREST PRO/Enterprise app, set kibana.access: unrestricted. You can use this rule together with the users rule to restrict access to selected admins.

This sub-rule is often used with the indices rule, to limit the data a user is able to see represented on the dashboards. In that case, do not forget to allow the custom kibana index in the indices rule!

index
The default value is .kibana.

Specify to which index we expect Kibana to attempt to read/write its settings (use this together with the kibana.index setting in the kibana.yml file).

This value directly affects how kibana.access works, because at all access levels (yes, even admin), the kibana.access sub-rule will NOT match any write request in indices that are not the designated kibana index.

If used in conjunction with ReadonlyREST Enterprise, this rule enables multi-tenancy, because in ReadonlyREST a tenancy is identified with a set of Kibana configurations, which are by design collected inside a kibana index (default: .kibana).

⚠️IMPORTANT: When you use the kibana rule together with the indices rule in the same block, you don't have to explicitly allow the Kibana-related indices in the list of allowed indices of the indices rule. ROR will do it for you automatically.

Example: