For Elasticsearch
ReadonlyREST is a light-weight Elasticsearch plugin that adds encryption, authentication, authorization, and access control capabilities to Elasticsearch's embedded REST API. The core of this plugin is an ACL engine that checks each incoming request through a sequence of rules, a bit like a firewall. There are dozens of rules that can be grouped in sequences of blocks to form a powerful representation of a logic chain.
The Elasticsearch plugin known as ReadonlyREST Free is released under the GPLv3 license, or alternatively a commercial license (see ReadonlyREST Embedded), and lays the technological foundations for the companion Kibana plugin, which is released in two versions: ReadonlyREST PRO and ReadonlyREST Enterprise.
Unlike the Elasticsearch plugin, the Kibana plugins are commercial only, but they rely on the Elasticsearch plugin in order to work.
For a description of the Kibana plugins, skip to the dedicated documentation page instead.
In this document, we are going to describe how to operate the Elasticsearch plugin in all its features. Once installed, this plugin will greatly extend the Elasticsearch HTTP API (port 9200), adding numerous extra capabilities:
Encryption: transform the Elasticsearch API from HTTP to HTTPS
Authentication: require credentials
Authorization: declare groups of users, permissions and partial access to indices.
Access control: complex logic can be modeled using an ACL (access control list) written in YAML.
Audit events: a trace of the access requests can be logged to a file or index (or both).
The following diagram models an instance of Elasticsearch with the ReadonlyREST plugin installed and configured with SSL encryption and an ACL with at least one "allow" type ACL block.
The User Agent (e.g. cURL, Kibana) sends a search request to Elasticsearch using port 9200 and the HTTPS URL schema.
The HTTPS filter in the ReadonlyREST plugin unwraps the SSL layer and hands over the request to the Elasticsearch HTTP stack.
The HTTP stack in Elasticsearch parses the HTTP request
The HTTP handler in Elasticsearch extracts the indices, action, request type, and creates a SearchRequest (an internal Elasticsearch format).
The SearchRequest goes through the ACL (access control list), external systems like LDAP can be asynchronously queried, and an exit result is eventually produced.
The exit result is used by the audit event serializer to write a record to an index and/or the Elasticsearch log file.
If no ACL block was matched, or if a type: forbid block was matched, ReadonlyREST does not forward the search request to the search engine and creates an "unauthorized" HTTP response.
In case the ACL matches a type: allow block, the request is forwarded to the search engine.
The Elasticsearch code creates a search response containing the results of the query
The search response is converted to an HTTP response by the Elasticsearch code
The HTTP response flows back to ReadonlyREST's HTTPS filter and to the User agent
The simplest method to run Elasticsearch with the ReadonlyREST plugin is to use one of our docker images which you can find on Docker Hub:
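For example, a minimal sketch of running one of those images (the image name and tag below follow the Docker Hub listing and are only an example, so match them to the Elasticsearch version you need):

```bash
# Run a single-node Elasticsearch with ReadonlyREST pre-installed.
# Image name/tag are examples; pick the tag matching your ES version.
docker run -d --name es-ror \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  beshultd/elasticsearch-readonlyrest:8.9.1_ror_latest
```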
OR with Docker Compose:
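A minimal docker-compose.yml sketch under the same assumptions:

```yaml
# docker-compose.yml - single-node Elasticsearch with ReadonlyREST.
services:
  elasticsearch:
    image: beshultd/elasticsearch-readonlyrest:8.9.1_ror_latest  # example tag
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
```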
(To run the docker-compose.yml call docker compose up)
Either of these methods runs an Elasticsearch container with ReadonlyREST and its initial settings.
When the service is started you can test it using curl or Postman:
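For instance, with curl (the credentials are placeholders for whatever the init settings define):

```bash
# Smoke test: list indices through the ReadonlyREST-protected API.
curl -vvv -u <user>:<password> "http://localhost:9200/_cat/indices"
```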
You can create a locally customized readonlyrest.yml file and mount it as a Docker volume. Assuming that your ROR settings file is located at /tmp/my-readonlyrest.yml, you can use it like this:
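A sketch with docker run (same example image; the in-container path assumes the standard /usr/share/elasticsearch layout):

```bash
# Mount a custom readonlyrest.yml into the container's config directory.
docker run -d --name es-ror \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  -v /tmp/my-readonlyrest.yml:/usr/share/elasticsearch/config/readonlyrest.yml \
  beshultd/elasticsearch-readonlyrest:8.9.1_ror_latest
```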
OR
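The equivalent docker-compose.yml sketch:

```yaml
# Same setup via Docker Compose, mounting the custom settings file.
services:
  elasticsearch:
    image: beshultd/elasticsearch-readonlyrest:8.9.1_ror_latest  # example tag
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
    volumes:
      - /tmp/my-readonlyrest.yml:/usr/share/elasticsearch/config/readonlyrest.yml
```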
To install the ReadonlyREST plugin for Elasticsearch:
From the official download page, select your Elasticsearch version and send yourself a link to the compatible ReadonlyREST zip file. Then install the plugin as shown below.
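A sketch of the install command (the zip path and version placeholders are illustrative):

```bash
# Install the downloaded zip; note the file:// scheme plus absolute path.
bin/elasticsearch-plugin install file:///tmp/readonlyrest-<ROR_VERSION>_es<ES_VERSION>.zip
```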
Notice how we need to type in the format file:// + absolute path (yes, with three slashes).
When prompted about additional permissions, answer y.
If you are using Elasticsearch 6.5.x or newer, you need an extra post-installation step. Depending on the Elasticsearch version, this command might tweak the main Elasticsearch installation files and/or copy some jars to the plugins/readonlyrest directory.
⚠️IMPORTANT: for Elasticsearch 8.3.x or newer, the patching operation requires root user privileges.
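Assuming you run it from the Elasticsearch installation directory (prefix with sudo on ES 8.3.x or newer):

```bash
# Patch the Elasticsearch installation so the plugin can start.
jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar patch
```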
You can verify if Elasticsearch was correctly patched using the command verify:
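Again from the Elasticsearch installation directory:

```bash
# Report whether the Elasticsearch installation is patched.
jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar verify
```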
Please note that the tool assumes that you run it from the root of your ES installation directory, or that the default installation directory is /usr/share/elasticsearch. If you want or need to, you can tell it where your Elasticsearch is installed by executing one of the tool's commands with the --es-path parameter:
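For example (the custom path is a placeholder):

```bash
# Patch an Elasticsearch installed in a non-default location...
jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar patch --es-path /my/custom/path/to/es
# ...or verify it:
jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar verify --es-path /my/custom/path/to/es
```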
NB: In case of any problems with the ror-tools, please call:
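Calling the jar without any command prints its usage; a sketch, assuming the default plugin directory:

```bash
# Show ror-tools usage and its commands (patch, unpatch, verify).
jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar
```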
Create and edit the readonlyrest.yml settings file in the same directory where elasticsearch.yml is found:
Now write some basic settings, just to get started. In this example, we are going to tell ReadonlyREST to require HTTP Basic Authentication for all the HTTP requests, and return 401 Unauthorized otherwise.
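A minimal readonlyrest.yml for this (the credentials are placeholders to change):

```yaml
# readonlyrest.yml - require HTTP Basic Auth for every request.
readonlyrest:
  access_control_rules:
    - name: "Require HTTP Basic Auth"
      type: allow
      auth_key: user_name:password   # placeholder credentials
```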
(applies to ES 6.4.0 or greater)
ReadonlyREST and X-Pack security module can't run together, so the latter needs to be disabled.
Edit elasticsearch.yml and append xpack.security.enabled: false.
Then start (or restart) Elasticsearch, depending on your environment.
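For instance (a sketch; the service name assumes a standard package installation):

```bash
# From the Elasticsearch installation directory:
bin/elasticsearch
# or, on systemd-based installations:
sudo systemctl start elasticsearch.service
```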
Now you should be able to see the logs and ReadonlyREST-related lines like the one below:
The following command should succeed, and the response should show a status code 200.
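A sketch, using the placeholder credentials from the settings above:

```bash
# Authenticated request: expect HTTP 200.
curl -vvv -u user_name:password "http://localhost:9200/_cat/indices"
```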
The following command should not succeed, and the response should show a status code 401.
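A sketch:

```bash
# Unauthenticated request: expect HTTP 401.
curl -vvv "http://localhost:9200/_cat/indices"
```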
To upgrade ReadonlyREST for Elasticsearch:
Either kill the Elasticsearch process manually, or stop the service, depending on your environment.
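For instance (a sketch; the service name assumes a standard package installation):

```bash
# On systemd-based installations:
sudo systemctl stop elasticsearch.service
```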
If you are using Elasticsearch 6.5.x or newer, you need an extra pre-uninstallation step. This will remove all previously copied jars from ROR's installation directory.
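Assuming the default layout, the unpatching step looks like this:

```bash
# Restore the original (unpatched) Elasticsearch installation files.
jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar unpatch
```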
You can verify if Elasticsearch was correctly unpatched using the command verify:
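Same command as before; it should now report an unpatched installation:

```bash
jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar verify
```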
NB: In case of any problems with the ror-tools, call the tool without arguments to see its usage, as shown in the installation section above.
Then remove the old plugin and install the new version, e.g.:
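A sketch of both commands (version placeholders are illustrative):

```bash
# Remove the old plugin version...
bin/elasticsearch-plugin remove readonlyrest
# ...and install the new one.
bin/elasticsearch-plugin install file:///tmp/readonlyrest-<ROR_VERSION>_es<ES_VERSION>.zip
```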
If you are using Elasticsearch 6.5.x or newer, you need an extra post-installation step. Depending on the Elasticsearch version, this command might tweak the main Elasticsearch installation files and/or copy some jars to the plugins/readonlyrest directory.
⚠️IMPORTANT: for Elasticsearch 8.3.x or newer, the patching operation requires root user privileges.
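As during installation (prefix with sudo on ES 8.3.x or newer):

```bash
jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar patch
```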
You can verify if Elasticsearch was correctly patched using the command verify:
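As before:

```bash
jdk/bin/java -jar plugins/readonlyrest/ror-tools.jar verify
```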
NB: In case of any problems with the ror-tools, call the tool without arguments to see its usage, as shown above.
Then start Elasticsearch again, depending on your environment (see the start commands above).
Now you should be able to see the logs and ReadonlyREST-related lines like the one below:
Either kill the Elasticsearch process manually, or stop the service, depending on your environment (see the stop commands above).
If you are using Elasticsearch 6.5.x or newer, you need an extra pre-uninstallation step. This will remove all previously copied jars from ROR's installation directory.
You can verify if Elasticsearch was correctly unpatched using the verify command shown above.
NB: In case of any problems with the ror-tools, call the tool without arguments to see its usage, as shown above.
Finally, start Elasticsearch again, depending on your environment (see the start commands above).
Unless some advanced features are being used (see below), this Elasticsearch plugin operates like a lightweight, stateless filter glued in front of the Elasticsearch HTTP API. Therefore it's sufficient to install the plugin only on the nodes that expose the HTTP interface (port 9200).
Installing ReadonlyREST in a dedicated node has numerous advantages:
No need to restart all nodes, only the one you have installed the plugin into.
No need to restart all nodes to update the security settings.
No need to restart all nodes when a security update is out.
Less complexity on the actual cluster nodes.
For example, if we want to move to HTTPS all the traffic coming from Logstash into a 9-node Elasticsearch cluster which has been running stable in production for a while, it's not necessary to install the ReadonlyREST plugin in all the nodes.
Instead, create a dedicated, lightweight ES node on which to install ReadonlyREST:
(Optional) disable the HTTP interface from all the existing nodes
Create a new, lightweight, dedicated node without shards or master eligibility (see the sketch after this list).
Configure ReadonlyREST with SSL encryption in the new node
Configure Logstash to connect to the new node directly in HTTPS.
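A hedged sketch of the elasticsearch.yml for such a dedicated node (the node.roles syntax applies to ES 7.9+; older versions use the node.master / node.data flags instead):

```yaml
# elasticsearch.yml - coordinating-only node: no shards, no master eligibility.
node.roles: []
```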
⚠️IMPORTANT: By default, when the fields rule is used, it's required to install the ReadonlyREST plugin in all the data nodes.
The core of this plugin is an ACL (access control list): a logic structure very similar to the one found in firewalls. The ACL is part of the plugin configuration, and it's written in YAML.
The ACL is composed of an ordered sequence of named blocks
Each block contains some rules, and a policy (forbid or allow)
HTTP requests run through the blocks, starting from the first.
The first block whose rules are all satisfied decides whether to forbid or allow the request (according to its policy).
If none of the blocks is matched, the request is rejected.
⚠️IMPORTANT: The ACL blocks are evaluated sequentially, therefore the ordering of the ACL blocks is crucial. The order of the rules inside an ACL block, however, is irrelevant.
An example of an Access Control List (ACL) made of 2 blocks:
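A sketch of such a two-block ACL (names, credentials, and index patterns are illustrative):

```yaml
readonlyrest:
  access_control_rules:
    # Block 1: allow anything coming from localhost.
    - name: "Allow all requests from localhost"
      type: allow
      hosts: [127.0.0.1]

    # Block 2: allow one Basic Auth user to read index1.
    - name: "Allow user team1 to read index1"
      type: allow
      auth_key: team1:password_here
      indices: ["index1"]
      actions: ["indices:data/read/*"]
```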
The YAML snippet above, like all of this plugin's settings, should be saved inside the readonlyrest.yml file. Create this file in the same directory where elasticsearch.yml is found.
TIP: If you are a subscriber of the PRO or Enterprise Kibana plugin, you can edit and refresh the settings through a GUI. For more on this, see the documentation for the ReadonlyREST plugin for Kibana.
An SSL-encrypted connection is a prerequisite for the secure exchange of credentials and data over the network. To make use of it you need a certificate and a private key. Let's Encrypt certificates work just fine (see the tutorial below). Before ReadonlyREST 1.44.0, both files, the certificate and the private key, had to be placed inside a PKCS#12 or JKS keystore; see the tutorial at the end of this section. ReadonlyREST 1.44.0 or newer supports using PEM files directly, without the need to use a keystore.
ReadonlyREST can be configured to encrypt network traffic on two independent levels:
HTTP (port 9200)
Internode communication - transport module (port 9300)
An Elasticsearch node with ReadonlyREST can join an existing cluster based on native SSL from the xpack.security module. This configuration is useful to deploy ReadonlyREST Enterprise for Kibana to an existing large production cluster without disrupting any configuration. More on this in the dedicated paragraph of this section.
It wraps the connection between the client and the exposed REST API in an SSL context, hence making it encrypted and secure. ⚠️IMPORTANT: To enable SSL for the REST API, open elasticsearch.yml and append this one line:
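The line in question (the HTTP transport type provided by ReadonlyREST):

```yaml
http.type: ssl_netty4
```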
Now in readonlyrest.yml add the following settings:
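A keystore-based sketch (file name and passwords are placeholders):

```yaml
readonlyrest:
  ssl:
    # Keystore holding the server certificate and private key.
    keystore_file: "keystore.jks"
    keystore_pass: readonlyrest
    key_pass: readonlyrest
```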
The keystore should be stored in the same directory with elasticsearch.yml and readonlyrest.yml.
This option encrypts the communication between the nodes forming an Elasticsearch cluster.
⚠️IMPORTANT: To enable SSL for internode communication, open elasticsearch.yml and append this one line:
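The line in question (the transport type provided by ReadonlyREST):

```yaml
transport.type: ror_ssl_internode
```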
In readonlyrest.yml the following settings must be added (it's just an example configuration presenting the most important properties):
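A sketch (file name and passwords are placeholders):

```yaml
readonlyrest:
  ssl_internode:
    # Keystore with this node's certificate for node-to-node encryption.
    keystore_file: "ror-keystore.jks"
    keystore_pass: readonlyrest
    key_pass: readonlyrest
```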
Similar to ssl for HTTP, the keystore should be stored in the same directory with elasticsearch.yml and readonlyrest.yml. This config must be added to all nodes taking part in encrypted communication within the cluster.
Internode communication with XPack nodes
It is possible to set up internode SSL between ROR and XPack nodes. It works only for ES versions above 6.3.
To set up a cluster in such a configuration, you have to generate a certificate for the ROR node according to this description: https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup.html#generate-certificates.
The generated elastic-certificates.p12 can then be used in the ROR node with a configuration like this:
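A sketch (the password placeholders stand for whatever you set when generating the certificate):

```yaml
readonlyrest:
  ssl_internode:
    keystore_file: "elastic-certificates.p12"
    keystore_pass: changeme
    key_pass: changeme
```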
By default, certificate verification is disabled. This means the certificate is not validated in any way, so all certificates are accepted. This is useful in local/test environments, where security is not the most important concern. In production environments it is advised to enable this option. It can be done by means of:
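That is:

```yaml
certificate_verification: true
```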
under the ssl_internode section. This option is applicable only for internode SSL.
By default, hostname verification is disabled. This means that the hostname or IP address is not verified to match the names in the certificate. To enable hostname verification, add the following lines in the ssl_internode section:
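A sketch; the exact option name is recalled from the settings reference, so double-check it for your ReadonlyREST version:

```yaml
# Assumed option name - verify against the settings reference.
hostname_verification_enabled: true
```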
By default, client authentication is disabled. When enabled, the server asks the client for its certificate, so ES is able to verify the client's identity. It can be enabled by means of:
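That is:

```yaml
client_authentication: true
```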
under the ssl section. This option is applicable for REST API external SSL and internode SSL.
Optionally, it's possible to specify a list of allowed SSL protocols and SSL ciphers. Connections from clients that don't support the listed protocols or ciphers will be dropped.
ReadonlyREST will log a list of available ciphers and protocols supported by the current JVM at startup.
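A sketch (the protocol and cipher names are standard JVM identifiers, shown as examples):

```yaml
readonlyrest:
  ssl:
    # ...keystore settings as above...
    allowed_protocols: [TLSv1.2]
    allowed_ciphers: [TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384]
```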
ReadonlyREST allows using a custom truststore, replacing the default one provided by the JRE. A custom truststore can be set with:
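For example (file name and password are placeholders):

```yaml
truststore_file: "truststore.jks"
truststore_pass: truststore_password
```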
under the ssl or ssl_internode section. This option is applicable for both SSL modes, external SSL and internode SSL. The truststore should be stored in the same directory with elasticsearch.yml and readonlyrest.yml (like the keystore). When not specified, ReadonlyREST uses the default truststore.
If you are using ReadonlyREST 1.44.0 or newer, then you are able to use PEM files directly, without the need to place them inside a keystore or truststore.
To use PEM files instead of a keystore file, use such configuration instead of the keystore_file, keystore_pass, key_pass fields:
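A sketch; the option names below are recalled from the settings reference and the file names are placeholders, so double-check both:

```yaml
# Assumed option names - verify against the settings reference.
server_certificate_key_file: "privkey.pem"
server_certificate_file: "fullchain.pem"
```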
To use a PEM file instead of a truststore file, use such configuration instead of the truststore_file, truststore_pass fields:
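A sketch under the same caveat (assumed option name, placeholder file name):

```yaml
# Assumed option name - verify against the settings reference.
client_trusted_certificate_file: "ca.pem"
```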
We are going to show how to first add all the certificates and the private key into a PKCS#12 keystore, and then (optionally) convert it to a JKS keystore. ReadonlyREST supports both formats.
⚠️IMPORTANT: if you are using ReadonlyREST 1.44.0 or newer, then you don't have to create a keystore. You are able to use PEM files directly, as described above.
This tutorial can be a useful example on how to use certificates from other providers.
1. Create keys
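For example, with certbot (a sketch; DOMAIN.tld is a placeholder for your domain):

```bash
# Obtain a Let's Encrypt certificate via the standalone HTTP challenge.
certbot certonly --standalone -d DOMAIN.tld
```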
Now change to the directory (probably /etc/letsencrypt/live/DOMAIN.tld) where the certificates were created.
2. Create a PKCS12 keystore with the full chain and private key
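A sketch using the Let's Encrypt file names found in that directory (you will be prompted for an export password):

```bash
# Bundle the full certificate chain and the private key into a PKCS12 keystore.
openssl pkcs12 -export \
  -in fullchain.pem \
  -inkey privkey.pem \
  -out keystore.p12 \
  -name letsencrypt
```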
3. Convert PKCS12 to JKS Keystore (Optional)
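A sketch with keytool (shipped with the JDK):

```bash
# Convert the PKCS12 keystore into a JKS keystore.
keytool -importkeystore \
  -srckeystore keystore.p12 -srcstoretype PKCS12 -srcstorepass STORE_PASS \
  -destkeystore keystore.jks -deststoretype JKS
```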
The STORE_PASS is the password which was entered in step 2 as a password for the pkcs12 file. If you happen to get a java.io.IOException: failed to decrypt safe contents entry: javax.crypto.BadPaddingException: Given final block not properly padded, you have probably forgotten to enter the correct password from step 2.
(Credits for the original JKS tutorial to Maximilian Boehm)
Each incoming request to the Elasticsearch node passes through the installed plugin. During Elasticsearch node startup, the plugin rejects incoming requests until it has started; by default it rejects such requests with a 403 Forbidden response. To override this behavior, append to elasticsearch.yml one or both of the following settings:
readonlyrest.not_started_response_code - the HTTP code returned when the plugin has not started yet. Possible values are 403 (default) and 503.
readonlyrest.failed_to_start_response_code - the HTTP code returned when the plugin failed to start (e.g. due to a malformed ACL). Possible values are 403 (default) and 503.
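For example, in elasticsearch.yml:

```yaml
readonlyrest.not_started_response_code: 503
readonlyrest.failed_to_start_response_code: 503
```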
Every block must have at least the name field, and optionally a type field valued either "allow" or "forbid". If you omit the type, your block will be treated as type: allow by default.
Keep in mind that the ReadonlyREST ACL is a whitelist, so by default all requests are blocked, unless you specify a block of rules that allows all or some requests.
name will appear in logs, so keep it short and distinctive.
type can be either allow or forbid. It can be omitted; the default is allow.
Example: the simplest possible allow block.
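A sketch (the host is illustrative):

```yaml
readonlyrest:
  access_control_rules:
    # "type: allow" is implicit: any request from localhost is allowed.
    - name: "Allow anything from localhost"
      hosts: [127.0.0.1]
```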
⚠️IMPORTANT: if no blocks are configured, ReadonlyREST rejects all requests.
ReadonlyREST access control rules can be divided into the following categories:
Authentication & Authorization rules
Elasticsearch level rules
Kibana-related rules
HTTP level rules
Network level rules
Please refrain from using HTTP level rules to protect certain indices or to limit what people can do to an index. The level of control at this layer is really coarse, especially because the Elasticsearch REST API does not always respect RESTful principles. This makes HTTP a bad abstraction level for writing ACLs in Elasticsearch altogether.
The only clean and exhaustive way to implement access control is to reason about requests AFTER Elasticsearch has parsed them. Only then are the list of affected indices and the action known for sure. See Elasticsearch level rules.
This section describes the rules that can be used to authenticate and/or authorize users. Most of the following rules use HTTP Basic Auth, so the credentials are passed with the Authorization header and can be easily decoded if the request is intercepted by a malicious third party. Please note that this authentication method is secure only if SSL is enabled.
auth_key
auth_key: sales:p455wd
It's an authentication rule that accepts HTTP Basic Auth. Configure this value in clear text. Clients will need to provide the header, e.g. Authorization: Basic c2FsZXM6cDQ1NXdk, where "c2FsZXM6cDQ1NXdk" is Base64 for "sales:p455wd".
⚠️IMPORTANT: this rule is handy just for tests; replace it with another rule that hashes credentials, like auth_key_sha512 or auth_key_unix.
Impersonation is supported by this rule without an extra configuration.
auth_key_sha512
auth_key_sha512: 280ac6f...94bf9
The authentication rule that accepts HTTP Basic Auth. The value is a string like username:password hashed in SHA512. Clients will need to provide the usual Authorization header.
Other rules with the less secure SHA algorithms auth_key_sha256 and auth_key_sha1 are also available.
The rules also support an alternative syntax, where only the password is hashed, e.g.:
auth_key_sha512: "admin:280ac6f...94bf9"
In the example above, admin is the username and 280ac6f...94bf9 is the hashed secret.
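Either hash can be produced with standard tools; a sketch:

```bash
# Full syntax: hash the whole "username:password" string.
echo -n 'admin:p455wd' | sha512sum
# Alternative syntax: hash only the password, keep the username in clear.
echo -n 'p455wd' | sha512sum
```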
Impersonation is supported by these rules by default.
auth_key_pbkdf2
auth_key_pbkdf2: "KhIxF5EEYkH5GPX51zTRIR4cHqhpRVALSmTaWE18mZEL2KqCkRMeMU4GR848mGq4SDtNvsybtJ/sZBuX6oFaSg=="
# logstash:logstash
auth_key_pbkdf2: "logstash:JltDNAoXNtc7MIBs2FYlW0o1f815ucj+bel3drdAk2yOufg2PNfQ51qr0EQ6RSkojw/DzrDLFDeXONumzwKjOA=="
# logstash:logstash
The authentication rule that accepts HTTP Basic Auth. The value is hashed in the same way as it's done in the auth_key_sha512 rule, but it uses the PBKDF2 key derivation function. At the moment there is no way to configure it, so during the hash generation, the user has to take into consideration the following PBKDF2 input parameter values:
The hash can be calculated using this calculator (notice that the salt has to be Base64 encoded).
Impersonation is supported by this rule without an extra configuration.
auth_key_unix
auth_key_unix: test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0 # Hashed for "test:test"
⚠️IMPORTANT: this hashing algorithm is very CPU intensive, so we implemented a caching mechanism around it. However, this will not protect Elasticsearch from a DoS attack with a high number of requests with random credentials.
This is an authentication rule based on the /etc/shadow file syntax.
If you configured sha512 encryption with 65535 rounds on your system, the hash in /etc/shadow for the account test:test will be test:$6$rounds=65535$d07dnv4N$QeErsDT9Mz.ZoEPXW3dwQGL7tzwRz.eOrTBepIwfGEwdUAYSy/NirGoOaNyPx8lqiR6DYRSsDzVvVbhP4Y9wf0
You can generate the hash with the mkpasswd Linux command; you need the whois package (apt-get install whois or equivalent):
mkpasswd -m sha-512 -R 65534
You can also generate the hash with a Python script (works on Linux):
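A minimal sketch using the standard-library crypt module (Linux only; note that crypt was removed from the standard library in Python 3.13):

```python
# Generate an /etc/shadow-style SHA-512 hash with a custom number of rounds.
import crypt

password = "test"
salt = crypt.mksalt(crypt.METHOD_SHA512, rounds=65535)
print(crypt.crypt(password, salt))
```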