Scaling Performance with Drupal on a Budget

The Project

So this was a pretty interesting project. There was an existing application that had exceeded the technical boundaries of its current hosting platform, and so much technical debt had accrued that making it work within those boundaries would have meant rewriting it from the ground up. The goal instead was to find an alternative hosting solution that managed away the day-to-day work of keeping the underlying technologies running.

Hosting Solutions

PaaS Providers

This is a pretty interesting space. There are many awesome providers out there that will let developers and site builders just do their thing without having to worry about the underlying infrastructure for hosting a Drupal site. The issue here, though, is that the parameters in which the application needed to operate exceeded one or more limitations imposed by these providers. If you're looking to just spin up a site and start developing, they are the way to go. In the Drupal space, I highly recommend checking out providers such as Acquia and Pantheon. They're all pretty solid in their own right, with many options for how much performance you need and the level of expertise of their support staff.

Dedicated Servers or VPS Hosting

Managing underlying technologies such as the operating system, MySQL, Apache/NGINX, PHP, and Redis is not difficult when you're familiar with them, but they do carry their own sets of risks when updating releases, securing them, and managing their configurations. These activities simply did not fit within this project's budget in terms of the labor required to maintain those services. With that constraint, we had to rule out spinning up any kind of dedicated server or VPS for hosting the application.

Amazon Web Services vs Microsoft Azure

Working under the constraint that this application would not be on a dedicated server or VPS, the next solution was to consider a managed hosting provider such as AWS or Azure. This met the immediate criteria: the vendor would be responsible for the day-to-day management of the OS and technology stack, while still allowing the stack's configuration to be tuned so the application could run properly.

Scope of Technology

The types of hosting services provided by AWS and Azure are pretty robust, and this is more true for AWS than it is for Azure. In order to determine whether these platforms would serve the needs of the application, we first needed to define the services required for the application to run properly.

  • Web Server
  • PHP
  • SQL
  • Redis
  • File Storage

Amazon’s Web Services

If you can dream it, you can build it on AWS. The catch is, you need to know AWS. The amount of customization you can do here is unreal. This was the first attempt at hosting the application outside of its existing environment.

I want to start off by saying... HOLY OPTIONS, Batman! I've personally had very little experience with AWS. I've used it before and currently use it for some projects, but not on this scale. Outside of just using S3 or some VMs, I've never spent the time to put their various services together into a cohesive application. Just looking at the database section, you've got their RDS product, which offers pretty much any flavor of database to suit your needs, including Amazon's own flavor of MySQL. Need a web server? Same thing. This mountain of options alone set me back many hours of research to make sure I was making sound decisions.

Given the scope outlined and some research, I spun up an RDS instance with MariaDB in a few clicks and was on my way, letting the database import while I moved on to the other components. File storage was pretty straightforward: configuring an S3 bucket is simple, and there is a solid, well-supported set of modules for using S3 storage with Drupal. So far, so good, but this is where the easy part forked off into a management issue.
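As a rough illustration, wiring Drupal to an S3 bucket comes down to a few lines in settings.php. The key names below follow my recollection of the Drupal 8 version of the s3fs module, and the bucket, region, and credentials are placeholders, so treat the whole block as a hedged sketch rather than the exact configuration used here:

```php
<?php
// Hedged sketch: S3 credentials for the s3fs module, kept in
// settings.php rather than exported config. Key names assume the
// Drupal 8 s3fs module; all values are placeholders.
$settings['s3fs.access_key'] = 'AKIA...';          // IAM user access key
$settings['s3fs.secret_key'] = 'your-secret-key';  // IAM user secret key

// Bucket and region are normally set on the module's settings form,
// but can be pinned via config overrides like so:
$config['s3fs.settings']['bucket'] = 'example-drupal-assets';
$config['s3fs.settings']['region'] = 'us-east-1';
```

The point of putting the credentials in settings.php is that they stay out of the exported site configuration and can differ per environment.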

As I was already a few hours into the project, I was hopeful the next steps of configuring the web server and Redis would be just as quick so I could start testing the site. Sadly, this turned out not to be the case. Starting down the rabbit hole, I was confronted with a myriad of methods for deploying the web server technology. In my preliminary research to make sure AWS would fit the budget, I had determined that an EC2 instance would be a solid route, and I had documented the various tiers of service across the budget's spectrum that we could adjust as the application demanded once everything was up and running. In reality, the instance tier, to use an automotive analogy, was just the powertrain. There were many approaches to utilizing these instances to accomplish the end goal of hosting a Drupal app, but they varied wildly in the complexity of assembling the pieces, or simply did not fit the requirements of the project.

Most of the documentation I found around deploying Drupal to AWS involved standing up a container with your preferred flavor of Linux (Amazon even has their own), deploying your tech stack, and off you go. Since self-managed servers were already ruled out in the Dedicated/VPS hosting section, this approach was out as well. The next option was prebuilt images for the EC2 container service. These images, from a variety of sources, often came prepared for Drupal or for a specific tech stack such as Apache or NGINX. While I am familiar with containers, I would not consider myself proficient enough to just spin some up on an image and call it a day, and it still leaves the matter of keeping the tech within the containers up to date, which means either hoping the image maintainers do it or managing it myself. Rolling my own was just right out of the question.

I finally found a few articles centered around AWS Elastic Beanstalk. This seemed really promising, as its goal is to remove all the complexity of managing the tech so you can get down to the business of pushing your application's code up and configuring it to hook into other AWS services. Setup was rather simple: select the performance tier and PHP version, and we were off to the races getting the application stood up. At this point the existing application code needed to be rolled up into a zip file and uploaded as the source version. After a few minutes of uploading and provisioning, the code was live.

The next step was to get the application talking to the database, and that is where the fun began. To make any application change, the whole application needed to be zipped up with the changes and uploaded in full. The settings.php file was updated to connect to the database and produced the first of many problems: it wouldn't connect. Reading through the articles again, it looked like I had forgotten the step of adding the application's network to the same security group as the database to allow connectivity. Upon attempting that, I found myself unable to select the EC2 instance's network to add to the automatically created RDS security group. After trying a few browsers, I ended up scrapping this instance and recreating it, this time being sure to create the instance on the same network as the RDS instance. This appeared to work, as we got some semblance of the Drupal application showing. After some painful testing (this application really needs Redis to perform properly), the site threw an error and then refused to connect to the database at all. Multiple attempts at restarting the RDS instance and the EC2 instance did not yield any change.
After spending quite some time recreating the whole setup from scratch, I was never able to get the two talking again, much less move on to the important step of setting up Redis via AWS ElastiCache. At this point about 8 hours had gone into the effort, and the workflow and troubleshooting had become a pretty involved process for me, much less something to hand off to a developer.
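For what it's worth, when an RDS instance is attached directly to an Elastic Beanstalk environment (rather than created standalone, as in this attempt), AWS exposes the connection details to the application as RDS_* environment variables, and the settings.php wiring looks roughly like this sketch:

```php
<?php
// Sketch: reading Elastic Beanstalk's RDS_* environment variables in
// Drupal's settings.php. This only applies when the RDS instance is
// attached to the EB environment; a standalone RDS instance has its
// endpoint and credentials pasted in by hand instead.
$databases['default']['default'] = [
  'driver'   => 'mysql',
  'host'     => getenv('RDS_HOSTNAME') ?: 'localhost',
  'port'     => getenv('RDS_PORT') ?: '3306',
  'database' => getenv('RDS_DB_NAME') ?: 'drupal',
  'username' => getenv('RDS_USERNAME') ?: 'drupal',
  'password' => getenv('RDS_PASSWORD') ?: '',
  'prefix'   => '',
];
```

The attached-instance route ties the database lifecycle to the environment, which is its own trade-off, but it does sidestep the security group dance described above.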

Microsoft Azure

I want to start off by saying I was not looking forward to using Azure, given that Drupal is primarily a *nix-dependent CMS. That being said, I was pleasantly surprised by it. If the solution had just involved spinning up some VPSes, then it wouldn't really have mattered what we used, but from previous experience with a .NET application, I was curious to see how their web services offering would work with PHP. From an initial scoping of what needed to be put together, we were looking at a database service, Azure App Service, and Redis Cache. Just like AWS, this seemed simple enough, and hopefully would be in practice.

Some preliminary work in Azure was needed before we started allocating resources. Azure is pretty flexible when it comes to billing and grouping things together. The first step was to isolate the billing: you create a new subscription and associate a payment method with it, so all the resource costs for this application show up as one line item and don't have to be pulled apart from the other services we already have in Azure. The next step was to create a resource group in the subscription, since almost everything needs to be associated with one. This attributes all the services that get spun up to the group and makes locating them easier. All of this was accomplished in about 10 minutes, and we were ready to start spinning up services for the app.

Starting with Azure App Service, the process of selecting a performance/feature tier was really straightforward: pick how many cores and how much memory you need, then select Basic, Standard, or Premium based on the features you'd like the application to have. To start, I chose the Basic B2 plan with 2 cores and 3.5GB of RAM. To stand up an actual app, you first create a Web App and plug in some basics, such as the app name (which is also used as the DNS address), the resource group, the subscription, and the App Service plan. Within a few minutes the app was up and ready for code. There are many options for pushing code into the app. There is the traditional way of uploading the code to the app's storage directly, but the other options actually solved the workflow issues I ran into previously: the app can be pointed at a repository service. I chose GitHub, and after a few authentication steps the app could see the list of available repos, letting me select the app's repo and even the branch of the code I wanted to deploy. Within minutes of configuring this, the code was live, and any revisions could be made against the repo directly; the Azure Web App would pick them up and rebuild the container.

Now that the code was live, it was time to give it a database to talk to. Like AWS's EC2, there are a lot of options for accomplishing the same thing: a host of providers offering DB services in all the MySQL flavors. After some research, many of the tutorials suggested ClearDB, so I went that route. After about 10-15 minutes a database was available, and I could begin importing a copy of the database via the MySQL CLI. Within an hour (it's a pretty good sized database, even with the cache tables truncated) the database was set up. After gathering the connection string information, a quick update to settings.php pushed to git, and we were connected. It was really as simple as that, except the site was... slow. This was expected to some degree, as the application depends heavily on Redis for the bulk of its performance, but it still seemed too slow. In the moment, I was satisfied that it just worked and moved on to deploying Redis.
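The settings.php change itself is just the standard Drupal database array pointed at the provider's connection string. The hostname, database name, and credentials below are placeholders, not the real values:

```php
<?php
// Sketch: Drupal database settings pointed at an external MySQL
// provider. All values are placeholders taken from the provider's
// connection string details.
$databases['default']['default'] = [
  'driver'   => 'mysql',
  'host'     => 'us-cdbr-example.cleardb.com', // hostname from the connection string
  'port'     => '3306',
  'database' => 'acme_drupal',
  'username' => 'db_user',
  'password' => 'db_password',
  'prefix'   => '',
];
```

With the GitHub-backed deployment described above, committing this change to the repo is all it takes to roll it out.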

With the application code and database stood up, it was time to get performance back. As with everything else, there is more than one way to deploy Redis, depending on what you're comfortable with. The scope of this project made it pretty easy with Azure: Microsoft has its own Redis offering, Redis Cache, which is just that, a Redis instance you can connect to with no need to worry about the underlying technology stack. Same story as before: pick the right price/performance combo and roll it out. I wanted to start small and scale up if the application really demanded it, so I went with the Basic C0 tier, which offered a shared instance with 250MB of memory. After about 15-20 minutes the instance was up and ready to connect to. With PHP 7.0 there is really only one supported path for this, and that is phpredis. I assumed this was already installed for me and went about updating and uncommenting the Redis configuration block in settings.php. Upon pushing the code to GitHub, I was greeted by a 500 error indicating Redis and PHP were not talking, and a brief review of the configuration showed me that Redis Cache, by default, only listens on SSL port 6380. Googling around indicated that phpredis did not yet support SSL, so I had to go to the Redis instance and allow non-SSL traffic through. Once a few minutes had passed and Redis was reconfigured, I still received a 500 error indicating the application wasn't talking to Redis. At this point, I wondered if the phpredis module was even present in the App Service. To confirm, I resorted to creating a phpinfo page to dump the needed information about the environment. As this was all going entirely too smoothly, it turned out no phpredis module was loaded.
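For reference, the Redis block in settings.php ends up pointing at the Azure cache over the non-SSL port. The snippet below assumes the Drupal 8 style of the redis contrib module's settings; the hostname and access key are placeholders, so take it as a hedged sketch:

```php
<?php
// Sketch: Drupal redis module settings (Drupal 8 style) for Azure
// Redis Cache. Hostname and access key are placeholders. Port 6379
// is the non-SSL port, which has to be explicitly enabled on the
// cache instance, since phpredis could not speak SSL on 6380 at
// the time.
$settings['redis.connection']['interface'] = 'PhpRedis';
$settings['redis.connection']['host'] = 'example-app.redis.cache.windows.net';
$settings['redis.connection']['port'] = 6379;
$settings['redis.connection']['password'] = 'primary-access-key-here';
$settings['cache']['default'] = 'cache.backend.redis';
```

None of this matters, of course, until the phpredis extension itself is actually loaded, which is where things went next.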
Searching around, I found some helpful tutorials explaining how to load modules into an Azure app and figured I would give it a go with phpredis, even though the tutorials were not specifically about this use case. The first thing I needed was the phpredis DLL, since this was running on a Windows-based Azure App Service. As luck would have it, prebuilt DLLs can be found online. The tutorials I followed made sure to note that a VC9 NTS version of the DLL was used; unfortunately, the tutorials were rather dated. The download site did have a VC14 NTS version, so that is what I pulled. From here, a few things needed to be done to get the app to load phpredis.

  1. If you use Git on Windows, you can't just commit a *.dll file, as they're globally ignored. To fix this, open %UserProfile%\Documents\gitignore_global.txt and comment out the *.dll line
  2. Tell the PHP application to enable the extension and where to find the DLL
    1. Create two folders, 'ext' and 'ini', in the application's root
    2. In the 'ini' folder, create the file 'extensions.ini' containing a single line that points PHP at the uploaded DLL: extension = D:\home\site\wwwroot\ext\php_redis.dll
    3. Copy the php_redis.dll that you downloaded earlier into the 'ext' folder
    4. Commit the changes and push to the repo
  3. Let the Azure App know to look for additional .ini files
    1. Open the App instance in the Azure Portal
    2. Navigate to Application Settings
    3. Add the following entry to the App Settings subsection
      Key: PHP_INI_SCAN_DIR
      Value: D:\home\site\wwwroot\ini
    4. Save and restart the application

Once this was done, phpredis showed as loaded on the phpinfo page, and the site came up without a 500 error. That concluded the basic requirement of getting the site up and running with as little infrastructure management as possible.

Tweaking for Success

Move That Database!

At this point, I had about 8 hours invested in Azure, with only about 2 hours of actual work required to get the application stood up; much of the time was spent on research and experimentation. The application was working, just not as quickly as it should have been. I was quite proud of the accomplishment, since it had absorbed a few days of my time to finally have something to show for the effort, but I wasn't pleased with the result. The application still seemed terribly slow compared to where it was currently hosted, and there had to be a few things that could bring performance on par with its existing infrastructure.

From here I started looking into the application for clues, since a straight-up Drupal site likely doesn't have these performance issues; otherwise no one would recommend this setup for even a basic site. After some discussion with the developers and some performance testing, I was able to confirm that the site doesn't put a whole lot of strain on the Redis or App Service instances. What I began to suspect was the database. ClearDB, as I came to find out, is not actually hosted by Microsoft but is its own service. It was the only non-Microsoft piece in this configuration, so I figured I should see how it was doing. ClearDB has a dashboard to show load on the database, but I found it rather unreliable, as it would sometimes refresh stats quickly, load very slowly, or just break altogether. The other clue was that this application has a habit of opening many, many SQL connections when compiling views; if latency was high on those connections, it would compound into the long load times we were experiencing. I decided to give Azure Database for MySQL (preview) a go, since it was a native Azure offering and could hopefully give me better insight into DB performance. Another hour later I had the instance stood up with a copy of the database loaded.
A quick update to settings.php and we were hooked up. The site was noticeably faster from the start. Many of the views that made several independent connections to the database were faster as well, which led me to conclude there had been a significant latency issue with ClearDB. While the application was not faster than on the existing hosting, it was on par: heavier pages were a little slower than current, but lighter pages were just as quick. There was light at the end of the tunnel, with something worth handing off for extensive testing and letting the client validate the solution.

Adjusting PHP

One of the goals of this process was to ensure flexibility around what the application, in its current state, needs to function properly. In the new Azure environment this flexibility is granted via a .user.ini file in the application's root directory, which lets us override the global php.ini with the settings we require. For what this application needed, a .user.ini file committed to the git repo worked perfectly.
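As an illustration, a .user.ini like the one below overrides the directives PHP allows at that level. The values shown are examples, not the ones used for this application:

```ini
; Example .user.ini committed to the application root.
; Values are illustrative; only directives changeable at
; PHP_INI_PERDIR or PHP_INI_ALL level can be overridden here.
memory_limit = 512M
upload_max_filesize = 64M
post_max_size = 64M
max_execution_time = 120
```

Because the file lives in the repo, these overrides ride along with every deployment instead of being hand-applied to the server.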


Through several days and one all-nighter, I was finally able to bring together a solution that met the requirements of the project. There are still a few more pieces to clean up, but I will save those for the next article once the current configuration is validated. For now the project is in the hands of capable developers, who will validate that the application runs as intended and raise any concerns they have prior to switching over. This was a fantastic learning process and a chance to expand my knowledge of AWS and Azure, as I had not given either of them much time before, these cloud platforms not having been a fit for previous work or budgets. I am fairly certain that what was accomplished in Azure could be done on AWS in the hands of a more seasoned AWS administrator, but given the ease of setup and the time constraints on this project, Azure fit the bill with simple administration and a clean workflow for our developers.
