Wednesday, February 21, 2018

How to Set up F5 Application Connector


Last week we covered the basic overview of Application Connector and this week we’ll look at how to set it up.

Settle in, this is detailed. 😊

F5 Application Connector is made up of two components: the Proxy and the Service Center.

Step one is to set up the Service Center on BIG-IP.

A brief overview of the Service Center steps:
  • Download Service Center template (rpm) file
  • Provision iRules LX
  • Enable iApps LX
  • Install and deploy the Service Center
First, let's go to downloads.f5.com and download the template that we'll use to deploy the Service Center. It's an RPM file.



Now we're going to log into the BIG-IP and, under System > Resource Provisioning, set iRules LX to at least Nominal.
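If you'd rather do this from the command line, the equivalent tmsh commands look roughly like this (a sketch; iRules LX is the "ilx" module in tmsh):

    tmsh modify sys provision ilx level nominal   # set iRules LX provisioning to Nominal
    tmsh save sys config                          # persist the change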

Now we're going to connect to the BIG-IP using SSH - in this example we're using PuTTY - and run the command that enables iApps LX.
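As a rough sketch of that step - check the Application Connector setup guide for the exact command on your BIG-IP version - enabling iApps LX is typically just a matter of creating a flag file:

    # Create the flag file that enables the iApps LX framework (verify against the setup guide for your version)
    touch /var/config/rest/iapps/enable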

Now, back in the config utility, we're going to click iApps > Package Management LX (if you don't see this menu, restart the BIG-IP and it will appear). Then click Import and upload the RPM file that you downloaded.

When it's done, go to Application Services > Applications LX. Now we're going to select the Application Connector Template...

...and here is the Service Center.

We’re going to scroll to the bottom and add an application name and then save it.

Now we're going to select the application and click Deploy. The ball next to the name should turn green.

Now on to Step 2 - Setting up the Proxy.

You can do this on a small Linux instance that's running in the cloud in the same virtual network as your application servers.

Here are the steps for The Proxy:
  • Download and deploy the Docker container file
  • Create virtual server for Proxy traffic
  • Add virtual server in the Service Center
  • Add virtual server in the Proxy
  • Authorize the Proxy in the Service Center
Start by downloading the Docker container from downloads.f5.com - it's the one with the .tgz file extension - and copy that .tgz file to your proxy instance.



We’re running Windows and using WinSCP so we’ll just copy it from our local machine over to the proxy instance.

Now, back on the Linux proxy instance, we're going to load the file and run a command to deploy the Docker container. If you look at the command a little more closely, you'll see that we need to tell it which port to use - in this case we're using port 8090 - and give it a username and password.
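Here's a rough, hypothetical sketch of what those two commands look like - the archive name, image name, and credential handling below are placeholders rather than the real parameters, so take the exact syntax from the setup guide:

    # Load the Proxy image from the downloaded archive (file name is a placeholder)
    docker load -i application-connector-proxy.tgz

    # Run the Proxy, publishing its web UI on host port 8090; the image name is a placeholder and the
    # username/password options are omitted here - the setup guide lists the real parameters
    docker run -d -p 8090:443 application-connector-proxy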

Again, in the setup guide you'll find all the details on all the parameters that you can use in this command.

Now we can see that the deployment was successful and it's running.

We go back to the BIG-IP and create a Virtual Server so that BIG-IP can accept incoming traffic from the proxy. This has to be on port 443 and for testing we're going to use the default client SSL profile.
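If you'd like to script that virtual server, a tmsh sketch looks roughly like this (the name and destination address are placeholders; swap in your own client SSL profile for production):

    # Virtual server that accepts the Proxy's inbound connections on 443 with the default client SSL profile
    tmsh create ltm virtual vs_appconn_proxy destination 192.0.2.10:443 ip-protocol tcp profiles add { tcp http clientssl }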

In the Service Center, we're going to add that virtual server. Click Config Proxy Virtual Server, then pick the virtual server and Save.

If we go back and look at the virtual server, you can see that it now has an iRule associated with it. That's how you know it was successful.

Now we'll log into the Proxy on the port we specified - and if your Proxy is in the cloud, make sure your security rules allow traffic on that port. Again, in this case we used port 8090. We log in with the username and password that we gave it, and then in the Service Center Connections area we're going to add the public IP address of the Proxy virtual server.
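If your Proxy lives in AWS, opening that management port is just a security-group rule. A hypothetical example (the group ID and source address are placeholders - scope it to your admin IP, not the whole Internet):

    # Allow inbound TCP 8090 to the Proxy instance's security group from a single admin address
    aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8090 --cidr 203.0.113.25/32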

One last step: go back into the Service Center to authorize the Proxy, and now you can see the Proxy listed there.

Now on to the final step of adding your cloud nodes.

Here are the steps for The Cloud Nodes:
  • Create pool and virtual server for application traffic
  • Add the virtual server in the Service Center
  • Create AWS IAM role
  • Add node to the pool
On the BIG-IP, we’re going to create a pool and select one of these application connector monitors.


For now, the pool is empty and we create a virtual server for the application traffic, pointing to that pool.
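For reference, a tmsh sketch of that pool and virtual server (the names, address, and monitor below are placeholders - use whichever application connector monitor you selected above):

    # Empty pool that will hold the published cloud nodes, using an application connector monitor
    tmsh create ltm pool pool_cloud_app monitor app_connector_monitor

    # Virtual server for application traffic, pointing at that pool
    tmsh create ltm virtual vs_cloud_app destination 192.0.2.20:80 ip-protocol tcp profiles add { tcp http } pool pool_cloud_app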

Now we go into the Service Center and we tell it, 'hey, this is my virtual server for application traffic.'

To automatically add nodes to the Proxy - in this AWS example - we're going to create an IAM role...

...and then associate it with the Proxy instance.
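The exact permissions the role needs are listed in the setup guide; as a hypothetical sketch, assuming read-only access to describe EC2 instances is enough for discovery:

    # Create the role with a trust policy that lets EC2 assume it (the trust policy file is a placeholder)
    aws iam create-role --role-name AppConnectorProxyRole --assume-role-policy-document file://ec2-trust-policy.json

    # Attach read-only EC2 access (an assumption - confirm the required actions in the setup guide)
    aws iam attach-role-policy --role-name AppConnectorProxyRole --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess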

Then we'll need to restart the Proxy, and now we can go into the Proxy and see that it was authenticated by AWS.

And there are the nodes! The list shows both the Proxy instance and the application servers, and they're all automatically published to BIG-IP.

If we go back to BIG-IP, we can see the nodes in the Service Center.

Then we can go to the pool and choose them from a list. They're displayed here, but it's important to know that these nodes are not exposed to the Internet - it's as if the nodes are local to the BIG-IP. For more details, see the setup guide.

Congrats! You’ve configured and deployed F5’s Application Connector. You can watch the step through video here.

ps

Related:

Tuesday, February 13, 2018

F5 Application Connector Overview


Today, let’s take a look at Application Connector. Application Connector connects public clouds to your application service infrastructure within cloud interconnects or data centers. This enables the use of public cloud resources as part of your compute infrastructure while also performing workload discovery and deploying consistent app services across your multi-cloud environments.

The idea behind Application Connector is to have your applications in the cloud but have them treated as local to BIG-IP, so they aren't exposed to the internet. BIG-IP gets traffic from the nodes via a secure WebSocket connection. You can use Application Connector across multiple clouds, and you can keep the same virtual server address that you use now. If you've been hesitant about moving your applications to the cloud due to worries about security, this is a way to move to the cloud while still using your BIG-IP.

This diagram shows a basic Application Connector set up. You can see it is made up of two components – the Service Center which runs on BIG-IP and the Proxy which runs on a Docker container in the cloud with your application.

This is what a running version of the Proxy looks like. This webpage is served by a Docker container running on a lightweight Linux instance, in this example on Amazon Web Services. In the top right, you can see we have authentication set up with AWS. Under Proxy Stats, you can also see some details about aggregate traffic passing through the Proxy to the application servers. And under Service Center Connections, you can see the BIG-IP that is associated with the Proxy.

And below that under Published Nodes, you can see the list of Published Nodes. Published means that BIG-IP has these nodes available.

Let’s take a quick look at a few possibilities for adding and removing nodes.

Let’s say that these nodes are used in BIG-IP as pool members, so traffic is going to them. If I want to stop sending traffic to one of the nodes, we can simply disable it temporarily and if we’re done with a node, we can delete it completely. This is useful if you are on the Dev Team and you have access to the Proxy but you don’t have access to the BIG-IP. Without contacting IT, you can start and stop traffic to the application.

What happens if I delete a node? If we scroll down a bit more, there are three options: we can auto-publish nodes to BIG-IP, or we can auto-discover them, which means the Proxy will show you the nodes and you can choose whether to publish them to BIG-IP.

We went ahead and deleted one of the nodes and now that node appears under the Auto Discovery selection.

And we can decide if we want to publish to BIG-IP.

You also have the option to manually add nodes, so no matter where your nodes live - Azure, Google, AWS, or your data center - you can add them here and they'll communicate with BIG-IP via a secure WebSocket connection.

Now let’s turn to the BIG-IP. Here is the Service Center and it’s in the iApps section under Application Services>Applications LX. Here, we can see a visual representation of my active Proxy and its related nodes.

If we click Proxies, we can see the Proxy here and if we want to stop authorizing this Proxy we can. This will stop traffic going to these nodes.

If others in the organization add Proxies, we can go in and authorize them.

In addition, if we click API, we get a list of all the programmatic ways we can interact with Application Connector.

Now, on the BIG-IP, if we go to Local Traffic>Pools>Pool List we can look at the pool associated with this deployment. Let’s click Members. We can see that the nodes we’ve been working with are available for us to add to a Pool.

You'd use Application Connector if you're multi-cloud, since it doesn't matter where your nodes are - BIG-IP considers them local. From a security perspective, no public IPs need to be associated with your applications, and you can keep your encryption keys on BIG-IP instead of spreading them across clouds. For consistency, BIG-IP services like load balancing, WAF, traffic manipulation, and authentication are all centrally managed on BIG-IP. And after the initial configuration, there's not much ongoing management, so maintenance stays low.

The licensing is included with the iSeries appliance and available as an add-on for other platforms. You can watch the Application Connector – Part 1: Overview video from our TechPubs team.
ps

Tuesday, February 6, 2018

The DevCentral Chronicles Volume 1, Issue 2

If you missed our initial issue of the DC Chronicles, check it out here. The Chronicles are intended to keep you updated on DevCentral happenings and highlight some of the cool articles you may have missed over the last month. Welcome.

First up, 2018 will be the year that we publicly open up speaking proposals for our Agility conference this August. Historically, the presenters have been F5 employees or partners, but this year we'd love for you to share your BIG-IP expertise, knowledge and mad-skillz with the greater F5 community. Review the info here and submit your proposal by Friday, Feb 9.

Next up is our exciting new (and FREE!) Super-NetOps training program. The Super-NetOps curriculum teaches BIG-IP administrators how to standardize services and provide them through automation tool chains. For Network Operations Engineers you can learn new skills, improve collaboration and advance your career. For Network Managers and Architects, you can support digital transformation and improve operational practices. As Jason Rahm notes with his Lightboard Lessons: Why Super-NetOps, Super-NetOps is not a technology but an evolutionary journey. Already featuring two complete classes on integrating NetOps expertise into the benefits of a DevOps world, this training program is poised to help the NetOps professional take a well-deserved seat at the continuous deployment table. I’ve taken the training and it is amazing.

Speaking of Lightboard Lessons, John Wagnon is going through the OWASP TOP 10 in his latest series and is already on number 5 of the list, Lightboard Lessons: OWASP Top 10 - Broken Access Control. The OWASP Top 10 is a list of the most common security risks on the Internet today and John has been lighting up each in some cool videos. If you want to learn about the OWASP TOP 10, start here and follow along.

Interested in BIG-IP security? Then check out Chase Abbot's Security Hardening F5's BIG-IP with SELinux. When a major release hits the street, documentation and digital press tend to focus on new or improved user features; seldom do underlying platform changes make the spotlight. Each BIG-IP release has plenty of new customer-centric features, but one unsung massive update is SELinux's extensive enforcing-mode policy across the architecture. Chase says that BIG-IP and SELinux are no strangers, having coexisted since 2009, but comparing our original efforts to our current SELinux implementation is akin to having your kid's youth soccer team shoot penalties against David Seaman. Good one.

Also filed under security for this edition is the Meltdown and Spectre Web Application Risk Management article by Nir Zigler. Nir talks about a simple setting that can reduce the attack surface with the “SameSite” Cookie Attribute. If you’re worried about those vulnerabilities, this is your article.

This week, I’ll be at the F5 AFCEA West Tech Day on Wednesday Feb. 7 as part of the AFCEA West 2018 Conference in San Diego. A full day of technical sessions covering the challenges of DoD cloud adoption with a fun Capture the Flag challenge. Our friends at Microsoft Azure will also talk about solutions to address the complex requirements of a secure cloud computing architecture. There is a great article over on MSDN explaining how to Secure IaaS workloads for Department of Defense with Microsoft and F5. #whereisdevcentral

Lastly, don’t forget to check out our Featured Member for February, Lee Sutcliffe, Lori’s take on #SOAD The State of Application Delivery 2018: Automation is Everywhere and the new F5 Editor Eclipse Plugin v2 which allows you to use the Eclipse IDE to manage iRules, iRules LX, iControl LX, and iApps LX development.

You can stay engaged with @DevCentral by following us on Twitter, joining our LinkedIn Group or subscribing to our YouTube Channel. Look forward to hearing about your BIG-IP adventures.

ps

Thursday, February 1, 2018

DevCentral's Featured Member for February - Lee Sutcliffe

After a brief hiatus for the New Year, we're kicking off the 2018 Featured Member series with a new DevCentral MVP: MrPlastic, Lee Sutcliffe. Like Kevin this past December, Lee does a great job with the opening question, so we'll let him tell his story. A long-time DevCentral member and always engaged with the community, Lee Sutcliffe is DevCentral's Featured Member for February 2018. Congrats Lee!

DevCentral: First, please explain to the DevCentral community a little about yourself, what you do and why it’s important.
Lee Sutcliffe: I guess I always enjoyed fixing and building things, taking things apart to see how they worked (admittedly, not always being able to put them back together again). From a young age, my younger brother would design something on paper and I'd have to build it with Lego. So it comes as no surprise that he is now an Architect and I'm an Engineer (of sorts).

My first IT job was a sandwich placement; after two years of University you spend a year working in Industry before going back to complete your final year. The idea being, when you graduate you already have some level of experience in the real world as well as your degree to help you get on the job ladder. 
My placement was at a local high school as an ICT Technician, doing anything from network cabling to NT4-Windows 2000 migrations. 

After going back to University and graduating I spent a year being a typical long haired hedonistic backpacker commonly known as hippie before finally deciding I should stop enjoying life and go earn some money for a change. 

Since then I worked in another high school as a Network Manager for three years before landing a job in 2009 with Callcredit, a credit reference agency in Leeds, UK. It was here that I really cut my teeth and was able to develop my career as a network engineer using Cisco, F5 and Check Point technologies, amongst others.

I left the safety of permanent employment in 2013 to become a freelance contractor, working for a variety of clients, mostly in the financial sector, which is where I find myself today at Lloyds Banking Group.
DC: You are a very active contributor in the DevCentral community. What keeps you involved?
LS: DevCentral is always my first port of call for anything I don't know straight away. Members of the community have really helped me out over the years, especially in the early part of my career. I get a sense of satisfaction from helping others, and it's important to give something back. For fear of sounding too altruistic, it's also a good way of keeping up to date and refreshing old skills, as well as learning new ones.
DC: Tell us a little about the areas of BIG-IP expertise you have.
LS: Like a lot of people, BIG-IP LTM is my bread and butter, and coming from a comms background it's definitely the product I'm most familiar with, especially given its wide adoption. However, I have also worked a lot with GTM, APM and AFM, and at the moment I'm working exclusively with iRules. I'm just starting to look into iRules LX, which is a really interesting area.
DC: You are a F5 Consultant at Lloyds Banking Group. Can you describe your typical workday and how you manage work/life balance?
LS: I have been working as a contractor at LBG since July last year. I work within a team that are responsible for the maintenance and development of iRules used within the Bank, mainly for the online banking platform. Without disclosing too much, the use of iRules is vast, easily the most comprehensive I've seen anywhere, with custom proc libraries and in-house certificate management APIs to name but a few. For all intents and purposes, it's a developer role, which had quite a steep learning curve, but I'm enjoying the challenges. Work/life balance can be tricky, mainly because I work away in London during the week and home is over 300km away, which means weekends can become a bit rushed and end too quickly.
DC: You have many F5 Certifications including Technology Specialist (LTM) certifications. Why are these important to you and how have they helped with your career?
LS: Having F5 certifications has certainly helped me align my career down a more specific F5 route. They haven't been plagued by brain dumps, so the certifications actually mean something. I also like how the exams are written: you can't learn parrot-fashion, and you have to have had hands-on experience working with the technology. I'd like to sit my 401 exam eventually, but my limited ASM knowledge is currently preventing me from getting all four CTS certificates - something I'm keen to resolve!
DC: Describe one of your biggest BIG-IP challenges and how DevCentral helped in that situation.
LS: I think my biggest BIG-IP challenge has been the adjustment to my current role. To go from a guy who 'did wires' to writing code for a living was a challenge, especially in the first couple of months. My first project at Lloyds was to develop a framework for a microservice, which involved multiple separate iRules and hundreds of lines of code for session management and encryption services. However, one of my most memorable challenges was earlier in my career and was actually quite simple, and I do remember feeling particularly pleased with the solution. I had to create a monitor for a web method; at the time, the required version of NTLM wasn't supported as standard, and I was racking my brains for ages when someone on DevCentral suggested I could use cURL and an external monitor. So I used tcpdump to capture the request, rebuilt the XML using cURL, tested the result, and then used an external monitor to check the service. I remember how impressed I was by how customizable the product is - if it doesn't do something out of the box, there's usually a way to do what you need.
DC: Could you also give the backstory to Mr. Plastic if there is one?
LS: As for my DevCentral handle, MrPlastic - well, that may take some explaining! I used to produce a form of hard, aggressive dance music called breakcore under the pseudonym Monster Plastic, and ran a club night in Leeds with my brother where we played our music and booked guest DJs and producers. It was a mixture of jungle, hardcore, gabber and heavy breaks - your mother wouldn't approve! Monster Plastic soon developed into Lee Plastic. As for the Mr? I don't know, maybe I just got married and settled down!
DC: Lastly, if you weren’t an IT admin – what would be your dream job? Or, when you were a kid – what did you want to be when you grew up?
LS: I'd like to be able to work outside; I'm a keen rock climber and mountaineer, and working in London doesn't lend itself to getting out as much as I'd like. So I'd probably like to work as a climbing and mountaineering instructor. When I was younger I wanted to pilot search and rescue helicopters for the Royal Air Force, but after university I was still enjoying partying too much and wasn't quite ready for a twelve-year commission!

Thanks Lee! Check out all of Lee's DevCentral contributions, connect with him on LinkedIn and visit Lloyds Banking Group or follow on Twitter.