This solution has a few prerequisites (a quick way to verify them follows this list):
- A Virtual Private Cloud (VPC) with at least two subnets, each in its own availability zone
- An AWS Elastic Load Balancer (ELB), which serves traffic to the BIG-IP VE instances
- An SSH key pair, which you’ll need to access the instances
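If you prefer to sanity-check those prerequisites from code rather than the console, here’s a minimal boto3 sketch; the VPC ID, ELB name, and key pair name below are placeholders for whatever you created.

```python
import boto3

# Placeholder identifiers; substitute the IDs/names from your own account.
VPC_ID = "vpc-0123456789abcdef0"
ELB_NAME = "my-waf-elb"
KEY_NAME = "my-ssh-key"

ec2 = boto3.client("ec2")
elb = boto3.client("elb")  # classic Elastic Load Balancer

# The VPC needs at least two subnets, each in its own availability zone.
subnets = ec2.describe_subnets(
    Filters=[{"Name": "vpc-id", "Values": [VPC_ID]}]
)["Subnets"]
zones = {s["AvailabilityZone"] for s in subnets}
print(f"{len(subnets)} subnets across zones: {sorted(zones)}")

# The ELB that will serve traffic to the BIG-IP VE instances.
lb = elb.describe_load_balancers(LoadBalancerNames=[ELB_NAME])
print("ELB DNS name:", lb["LoadBalancerDescriptions"][0]["DNSName"])

# The SSH key pair you'll use to access the instances.
keys = ec2.describe_key_pairs(KeyNames=[KEY_NAME])
print("Key pair fingerprint:", keys["KeyPairs"][0]["KeyFingerprint"])
```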
You have two choices for how to deploy: you can go to the AWS Marketplace and search for ‘f5 waf’, or you can go to the F5 Networks GitHub site. GitHub usually has the latest and greatest, so we’ll use that.
Click the f5-aws-cloudformation repository, then click Supported, then solutions/autoscale, then waf. Scroll down a bit and click Launch Stack. At the Select Template screen, click Next and fill out the template.
When you get to the template, the Deployment Name is appended to all the instances so you can tell which ones are yours. Since we already set up a VPC with two subnets in two zones (not regions), we’ll select those in the VPC ID field. The Restricted Source Address is available if you only want to allow specific IP addresses to reach your BIG-IP VE instances.
Next is the AWS Elastic Load Balancer name, then choose your SSH key, which is needed to connect to the instances. We’ll leave the defaults for the rest.
Then you’ll get to the Auto Scaling Configuration section, which is where you determine when new WAF instances are created. You’ll want to configure the Scale Up and Scale Down Bytes Thresholds, which determine when an instance is launched/added and when one is removed.
Under WAF Virtual Service Configuration is where you’ll enter the application’s Service Port and DNS. In addition, if you want application servers to be added to the pool automatically, so traffic goes to them without manually configuring the BIG-IP, you can also add the Application Pool Tag Values, which works great. Next, choose your WAF Policy Level (low, medium, or high) and click Next and Next.
Also, click the check box that acknowledges you have the appropriate permissions to create IAM roles and an S3 bucket. Click Create and the CFT will start creating resources.
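If you’d rather launch the stack from a script than from the console, the same step looks roughly like the boto3 sketch below. The template URL and parameter keys here are illustrative only; use the Launch Stack URL from the f5-aws-cloudformation repo and the exact parameter names defined in that template.

```python
import boto3

cfn = boto3.client("cloudformation")

# Illustrative values only: point TemplateURL at the template from the
# f5-aws-cloudformation repo and match its actual parameter names.
cfn.create_stack(
    StackName="waf-autoscale-demo",
    TemplateURL="https://example-bucket.s3.amazonaws.com/autoscale-waf.template",
    Parameters=[
        {"ParameterKey": "deploymentName", "ParameterValue": "demo-waf"},
        {"ParameterKey": "vpc", "ParameterValue": "vpc-0123456789abcdef0"},
        {"ParameterKey": "sshKey", "ParameterValue": "my-ssh-key"},
    ],
    # Equivalent to the console check box: the stack creates IAM roles.
    Capabilities=["CAPABILITY_IAM"],
)
```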
This process can take about 15 minutes to complete, and when it is done you’ll see CREATE_COMPLETE on your dashboard. The resources might be available right away, but it is recommended to wait at least 30 minutes before digging into things.
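If you’re scripting the deployment, a CloudFormation waiter saves you from refreshing the dashboard; this sketch assumes the stack name used above.

```python
import boto3

cfn = boto3.client("cloudformation")

# Polls until the stack reaches CREATE_COMPLETE (raises if creation fails).
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="waf-autoscale-demo")
print("Stack is CREATE_COMPLETE")
```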
To see what the CFT created and confirm completion, go to Services>EC2>Auto Scaling Groups. You can see that a BIG-IP VE instance has been created and added to the group. Also, be aware that the default for the Scaling Policies is to wait 40 minutes before launching a new instance; you may want to adjust that to your preference. To be clear, AWS is always monitoring the traffic to see whether you are exceeding the limits you’ve set. The Scaling Policies setting simply means that after one instance is launched (you hit the limit and one is up), AWS waits 40 minutes (or whatever your value is) before launching another. It’ll keep going until you’ve hit the maximum number of instances specified; we set three.
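If you want to inspect or change that cooldown without clicking through the console, something like this boto3 sketch works; the group and policy names are placeholders, so pull the real ones from describe_policies before re-putting the policy.

```python
import boto3

autoscaling = boto3.client("autoscaling")
GROUP = "demo-waf-autoscale-group"  # placeholder; use your group's real name

# List the scale up / scale down policies the CFT attached to the group.
for policy in autoscaling.describe_policies(
    AutoScalingGroupName=GROUP
)["ScalingPolicies"]:
    print(policy["PolicyName"], "cooldown:", policy.get("Cooldown"))

# Re-put a simple scaling policy with a shorter cooldown, e.g. 20 minutes.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=GROUP,
    PolicyName="demo-waf-scale-up",  # placeholder; reuse the existing policy name
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=1200,
)
```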
While in Services>EC2, you can also inspect the ELB and see that the BIG-IP VE instance is registered and in service. Traffic goes through the load balancer to the BIG-IP VE, and then to the application server.
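You can check the same thing with the classic ELB API; the load balancer name here is a placeholder.

```python
import boto3

elb = boto3.client("elb")  # classic Elastic Load Balancer

# Shows each registered BIG-IP VE instance and whether it is InService.
health = elb.describe_instance_health(LoadBalancerName="my-waf-elb")
for state in health["InstanceStates"]:
    print(state["InstanceId"], state["State"], state.get("Description", ""))
```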
Lastly, let’s look at the list of instances in Services>EC2>Instances; the instances are there and ready to go!
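A quick way to pull that same list programmatically is to filter on the deployment name; the tag key and value below are assumptions, so adjust them to however your instances actually end up tagged.

```python
import boto3

ec2 = boto3.client("ec2")

# Assumed tag filter: the CFT names each instance it launches using the
# Deployment Name, so filter on whatever your instances are really tagged with.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Name", "Values": ["demo-waf*"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        print(
            instance["InstanceId"],
            instance["State"]["Name"],
            instance.get("PrivateIpAddress"),
        )
```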
Then, when there is too much traffic, another instance is added. Since the limit was exceeded, AWS launches new instances, up to the maximum of three. And when the traffic falls, the extra instances shut down.
That’s it! Easily scale your BIG-IP application security on AWS. Thanks to our TechPubs group, and watch the video demo here.
ps