In the last post, I talked about proxy servers and how they are an integral part of cloud and microservices-based designs. It makes sense to get our hands dirty and try setting up a reverse proxy. I have chosen Envoy proxy for this example.
Envoy proxy can be installed on most popular operating systems and also has a Docker image. The site provides very good getting-started documentation – https://www.envoyproxy.io/docs/envoy/latest/start/start. For this example, I am setting up Envoy proxy on macOS.
First of all, make sure you have Homebrew installed. Then execute the following commands to install Envoy proxy:
brew tap tetratelabs/getenvoy
brew install envoy
Finally, check the installation: envoy --version
Once we are satisfied with the installation, we can run a simple test case where we redirect traffic coming into the Envoy proxy to an application server. To start with, I have deployed a simple hello world application on port 1111.
$curl localhost:1111
Hello World
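If you want to follow along but don't have an application handy, any small HTTP server answering on port 1111 will do. Below is a minimal stand-in in Python (hello.py is a hypothetical file name of my own, not part of the original setup); it simply returns "Hello World" for every GET request.
# hello.py – a minimal stand-in for the hello world backend on port 1111
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET request with a plain-text "Hello World"
        body = b"Hello World\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Listen on the same address and port the Envoy cluster will point at
    HTTPServer(("127.0.0.1", 1111), HelloHandler).serve_forever()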
Now, to get started with Envoy proxy, we will take the simple static YAML configuration example from the documentation and modify it to our needs: https://www.envoyproxy.io/docs/envoy/latest/configuration/overview/examples#static
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 127.0.0.1, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: some_service }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: some_service
    connect_timeout: 1s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 1111
A couple of things to observe in the above YAML file. First, look at the filter we are using: envoy.filters.network.http_connection_manager, the simplest filter for forwarding HTTP traffic. The second important thing is the source and destination ports: we are listening for traffic on port 8080 and redirecting it to port 1111. This is a simple example where both the Envoy proxy and the application server run on the same local machine; that would not be the case in the real world, where the cluster address becomes more meaningful (a sketch of a remote cluster follows at the end of this post). For now, let's run and test the Envoy proxy.
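As a side note, before starting the proxy you can ask Envoy to check the file without actually running it; recent Envoy builds ship a validate mode for this:
envoy --mode validate --config-path ~/Desktop/workspace/envoyexample.yaml
If the configuration is malformed, Envoy reports the error and exits instead of starting listeners.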
Start the Envoy proxy with the config YAML we created:
envoy --config-path ~/Desktop/workspace/envoyexample.yaml
Once Envoy has started successfully, you will see that we can hit port 8080 and get the output from the application on port 1111 that we started earlier.
$curl localhost:8080
Hello World
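To close the loop on the earlier point about cluster addresses: in a real deployment the cluster would point at one or more remote backends rather than 127.0.0.1. A rough sketch of what the cluster block might look like in that case, assuming a hypothetical hostname backend.internal.example.com that Envoy resolves via DNS:
  clusters:
  - name: some_service
    connect_timeout: 1s
    type: STRICT_DNS              # resolve a hostname instead of using a fixed IP
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: some_service
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: backend.internal.example.com   # hypothetical backend host
                port_value: 1111
With STRICT_DNS, Envoy periodically re-resolves the hostname and load-balances across all the addresses it returns, which is closer to how the proxy is used in front of real services.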