Last Call is a delight to work with – not only are they top-notch developers, they are great communicators, even with the least tech-savvy amongst us. My favorite part about working with them is their unfailing can-do attitudes and ability to follow through on even the tightest deadlines. We’ve thrown all sorts of crazy complicated requests at them and they surpass our expectations every time. Last Call makes us look like web rock stars… they are so good I almost don’t want to let the word out!
Danielle Cranmer, Web Manager
When we met, Rainforest Alliance needed to upgrade their existing Drupal 6 website to the latest major version, Drupal 7.
We developed and implemented an upgrade and migration path for the site and its 85 modules, 35 of them custom, to bring it to a fully functioning Drupal 7 build. Deployment was seamless: the Rainforest Alliance site went from Drupal 6 to 7 with zero downtime.
We enjoy a strong relationship with the Rainforest Alliance team, working together to continuously deliver strategic value in their digital properties, and were proud to be chosen for a full site redesign and upgrade.
Our work continues as the Rainforest Alliance’s development team, embedded within their internal Web Service Department, scaling our resources up and down as needed.
We’re proud of our long-standing support of the CIO Executive Council, a subsidiary of the International Data Group (IDG).
Their flexibility has allowed me to interact with them as an ad-hoc branch of my own IT department, responding to projects and help-desk issues with equal competency.
Steve Wills, Sr. Manager, Applications Development at the CIO Executive Council at IDG
How we did it
We enjoy working as a team to deliver on our full service commitments.
We deliver a range of expertise, providing solutions such as integrating with Salesforce to pull in membership data, automating group-based content access on their subscription-driven web service, and moving them to a highly available, scalable, cloud-based infrastructure with Apache Solr and high-performance caching technologies.
StoryCorps is an independently funded organization that collects, shares, and preserves people’s stories to remind people of our shared humanity, to strengthen and build the connections between us, to teach the value of listening, and to weave into the fabric of our culture the understanding that everyone’s story matters. All collected stories are stored in their online archive, accessible to the public by submitting a request or by listening to recordings in listening rooms at various public libraries. StoryCorps reached out to LCM for ongoing support and assistance with migrating their site’s archive of roughly 27TB of interviews and information to a new AWS platform.
The main StoryCorps Archive access point was built on a robust Drupal platform consisting of over 60,000 interview records and approximately 27TB of associated metadata, WAVs, MP3s, JPGs, and PDFs. The StoryCorps Archive platform connected with several critical business systems and performed around-the-clock ingests from their onsite storage arrays to the Drupal system via rsync. StoryCorps was looking for a trusted and capable firm to migrate their entire Archive — including the website, connected services, and media — from their single-server host to a combination of Amazon Web Services (AWS) products: EC2, S3, and Glacier.
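An around-the-clock rsync ingest like the one described above is typically driven by a scheduled job. The sketch below is purely illustrative — the paths, hostnames, and flags are placeholders, not StoryCorps’ actual configuration:

```python
import subprocess

def build_ingest_command(source_dir, dest_host, dest_dir, dry_run=False):
    """Assemble an rsync invocation for a recurring archive ingest.

    All paths and hostnames here are hypothetical placeholders.
    """
    cmd = [
        "rsync",
        "-az",                 # archive mode, compress in transit
        "--partial",           # keep partial transfers so large media can resume
        "--itemize-changes",   # log exactly what changed on each run
        f"{source_dir}/",      # trailing slash: sync the contents, not the dir itself
        f"{dest_host}:{dest_dir}",
    ]
    if dry_run:
        cmd.insert(1, "--dry-run")
    return cmd

# A scheduler such as cron would then run, for example:
# subprocess.run(build_ingest_command("/mnt/array/interviews", "drupal-host", "/var/archive"), check=True)
```

The `--partial` flag matters at this scale: with 27TB of WAVs in flight, an interrupted transfer can resume rather than restart.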
Last Call Media performed a thorough analysis and audit of all StoryCorps’ source data prior to and following the massive migration. We worked closely with StoryCorps’ internal Digital Team and engineering consultants to design, test, implement, and ultimately maintain the new AWS server infrastructure.
The archive is now running smoothly on a robust AWS setup, configured to let the platform efficiently scale and grow as the archive does — to the next 27TB and beyond.
As part of our ongoing engagement with the Commonwealth, we identified an opportunity to improve the customer and constituent experience by leveraging the Cloudflare CDN (Content Delivery Network). Following the initial discovery phase, we architected, implemented, and deployed Cloudflare’s Global CDN product to give the site best-in-class content delivery and caching, maintaining all the functionality of the previous CDN while improving development capabilities.
Creating Undetected Changes
In the discovery phase, we reviewed the marketplace to find the most appropriate CDN for the State’s use case, balancing security, performance, and cost considerations. Ultimately, Cloudflare was selected as the best fit because of its extensive firewall and DDoS protections, and its granular cache control using “Cache Tags,” which have the potential to boost performance for constituents and reduce the risk of site instability.
The first, and perhaps most critical, concern we addressed in the course of this project was that the CDN needed to be resilient, serving pages even if the site itself was not functioning properly. For example, the development team periodically does code releases that take the backend of the site completely offline, but constituents still need to be able to access content during this time. To meet this requirement, we adjusted the site’s caching headers to include directives to serve cached responses in the event of an error response from the origin. As a result, constituents are able to access the majority of Mass.gov even if a catastrophic event takes the web servers completely offline.
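The serve-stale-on-error behavior described above is standardized as the `stale-if-error` Cache-Control extension (RFC 5861). A minimal sketch of building such a header value — the specific lifetimes are illustrative, not Mass.gov’s actual configuration:

```python
def cache_control_header(max_age, stale_if_error, stale_while_revalidate=None):
    """Build a Cache-Control value telling a CDN to keep serving a cached
    copy when the origin returns an error (RFC 5861 extensions).

    The numeric lifetimes passed in are illustrative placeholders.
    """
    directives = [
        "public",
        f"max-age={max_age}",                 # how long the response is fresh
        f"stale-if-error={stale_if_error}",   # how long stale copies may cover origin errors
    ]
    if stale_while_revalidate is not None:
        # Optionally serve stale while fetching a fresh copy in the background.
        directives.append(f"stale-while-revalidate={stale_while_revalidate}")
    return ", ".join(directives)

# e.g. fresh for 30 minutes, but usable for up to a day if the origin is down:
# Cache-Control: public, max-age=1800, stale-if-error=86400
```

With a header like this, the CDN can paper over a backend outage for as long as the `stale-if-error` window allows.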
As a government website, Mass.gov is always at risk of attack from malicious actors. To mitigate this risk, Last Call Media undertook extensive configuration and testing of Cloudflare’s various security features, including the Web Application Firewall (WAF), DDoS protections, and custom firewall rules. We had a few hiccups along the way configuring the security features (at one point, content authors were receiving CAPTCHA verifications when submitting their changes), but were ultimately able to work through these issues and dial in the right balance of security and ease-of-use.
These workers were also integral to the success of this migration, beyond what we had initially imagined. During the testing and release phases of the project, they gave us a mechanism for making fixes that was reviewable and testable. Having a well-defined review and deployment process improved the team’s visibility into what changes were being made and let us avoid careless mistakes. Overall, we felt the development team’s velocity was greatly improved by this workflow.
The migration went as smoothly as possible, with no negative results.
I hope you are puffed up with pride. We simply couldn’t have done any — much less ALL — of this mountain of work without you. You’ve been a rock. Well, a very hard-working and creative rock. We are so lucky to have your help.
Lisa Mirabile, Project Manager
Looking ahead, we envision phase 2 as granular cache invalidation: when a piece of content changes, the CDN would invalidate only that piece of content, keeping the cache fresh. That would let us set very long cache lifetimes at the edge — we could cache pages for up to a year and rely on invalidation to refresh them when needed, significantly reducing the load on the backend servers. Currently, pages are cached for only 30 minutes. With longer cache lifetimes, we’d see immediately faster load times, lower infrastructure costs, and a reduced chance of a backend disruption.
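Granular invalidation of this kind maps onto Cloudflare’s purge-by-Cache-Tag API (an Enterprise feature): when content is saved, the CMS purges just the tags attached to the affected pages. A hedged sketch of assembling that API call — the zone ID, token, and tag names are hypothetical:

```python
import json

CLOUDFLARE_PURGE_URL = "https://api.cloudflare.com/client/v4/zones/{zone_id}/purge_cache"

def build_tag_purge_request(zone_id, tags, api_token):
    """Build the pieces of a Cloudflare purge-by-Cache-Tag request.

    zone_id, api_token, and the tag names are placeholders for
    illustration; purge-by-tag requires a Cloudflare Enterprise plan.
    """
    return {
        "url": CLOUDFLARE_PURGE_URL.format(zone_id=zone_id),
        "headers": {
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        # Only responses served with these Cache-Tag values are evicted.
        "body": json.dumps({"tags": list(tags)}),
    }

# A CMS save hook would then POST this, e.g. with the requests library:
# requests.post(req["url"], headers=req["headers"], data=req["body"])
```

Because only the changed content is evicted, everything else can stay cached for its full (long) lifetime.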
Pegasystems Inc (Pega) is the leader in cloud software for customer engagement and operational excellence. Businesses rely on Pega’s AI-powered software to optimize customer interactions while ensuring their brand promises are kept.
Pega’s development team grew rapidly and was suddenly faced with the technical challenges of delivering value to customers confidently and at speed. Pega hired Last Call Media to help improve, automate, and scale their deployment pipelines, and to implement DevOps best practices.
How we did it
After studying Pega’s workflow to identify areas of improvement and discuss pain points, we found that the team’s main challenges were around deployment. Developers did not have a strong sense of confidence in the deployability of the code, and team members still relied heavily on manual testing during the development cycle. This caused delays that frustrated the team and impacted business agility.
We all agreed that the development team should be able to write automated tests to rely less on tedious manual testing. With this goal in mind, we looked into removing blockers for the Pega team.
The development team should be able to write automated tests to rely less on tedious manual testing.
Pega’s hosting provider at that time made it prohibitively expensive to spin up the desired number of test environments, meaning we had to find a cost-effective way to spin environments up and down to enable automated testing. Providing those initial environments would also unlock the ability to demo and review work more easily before deploying to production.
Pega is a highly technical group, and we wanted to fully empower them to do their best work confidently. After reviewing and discussing, internally and with Pega, all of the options on the market and their associated costs, we chose to develop a custom solution built on Kubernetes in AWS. We made sure the entire system’s configuration was built with Terraform and kept in a GitHub repository, meaning anybody on Pega’s team could dig into the codebase to understand it and make changes in the future. Fully documenting the new system and how to use it has always been a priority for any client engagement.
Fully documenting the new system and how to use it has always been a priority for any client engagement.
With that foundation in place, we were quickly able to load Pega’s application into Docker images, write sample automated tests, and give their development team the ability to deploy an unlimited number of environments, one per GitHub branch, at a reasonable cost.
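One small but necessary piece of a one-environment-per-branch setup on Kubernetes is mapping arbitrary Git branch names onto valid namespace names (DNS-1123 labels: lowercase alphanumerics and hyphens, 63 characters max). A sketch of one possible scheme — illustrative of the approach, not the exact convention used for Pega:

```python
import hashlib
import re

def branch_to_namespace(branch, prefix="env"):
    """Derive a valid Kubernetes namespace name from a Git branch name,
    so each branch can get its own isolated environment.

    The `prefix` and truncation scheme here are hypothetical choices.
    """
    # Lowercase and replace anything that isn't [a-z0-9-] with a hyphen.
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")
    name = f"{prefix}-{slug}"
    if len(name) > 63:
        # Truncate, then append a short hash so long branch names stay unique.
        digest = hashlib.sha1(branch.encode()).hexdigest()[:8]
        name = f"{name[:54]}-{digest}"
    return name
```

For example, `feature/My-Branch` would become the namespace `env-feature-my-branch`, and CI could create or tear down that namespace as the branch opens and closes.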
As a result of our work with Pega, their team gained confidence in its automated build and test systems, bringing the company closer to continuous deployment. Pega can now confidently deliver incremental improvements in small batches. This improved the quality of their product and helped uncover bugs before they reached production.
Pega can now confidently deliver incremental improvements in small batches. This improved the quality of their product, and helped uncover bugs before they reached production.
Releasing smaller batches of work led to low-risk deployments and allowed improvements to reach the site’s users faster. Automating testing lowered costs and improved quality. The Pega team was able to rely on computers to do the repetitive tasks computers are good at and freed up Pega team members to do what humans are good at: listening to customers, solving problems, and being creative.
Finder helps millions of people worldwide make better decisions by allowing them to compare a wide range of products and services. Finding the right credit card, buying a home, or getting health insurance can be a daunting task. Finder.com makes the research more straightforward, and consequently, can save users time and money.
Since a major part of Finder’s business is comparing numerous products, Finder.com needs to provide users with tools that are quick and easy to use while still displaying a wealth of dynamic information. To accomplish this goal and give users a great experience, Finder’s team of developers needs to be able to continuously improve the platform and iterate quickly, regularly evaluating what delights users and drives engagement.
Last Call Media supports Finder’s efforts to build an exceptional digital experience for its users within the WordPress platform. We also helped their software development team rethink the way they deploy to production.
How we did it
On a daily basis, dozens of developers work simultaneously on Finder.com, adding new functionality, fixing bugs, and creating new ways to provide value to users. As the team scales, it is faced with several challenges around managing the deployment pipeline.
Until recently, developers were often limited and blocked by a complicated build process, where any code change took 12 to 16 minutes to go live. Making a change often required modifying multiple repositories, since the themes and plugins were split up and could not be fully decoupled. Deploying to staging environments was a manual process, and code reviews required lengthy instructions and were thus error-prone. The developer who authored a change needed to remember to merge all repositories for a successful deployment, or risk bringing down the site.
Speeding up the deployment process
An ineffective deployment pipeline can easily add up to hundreds of hours a month spent waiting for builds to complete — time that could have been spent delivering other functionality instead. The development workflow should be smooth, with stumbling blocks removed. Since Finder’s engineering team is growing rapidly, this problem needed to be addressed first.
To speed up time to market, we overhauled the build process at Finder, implementing Buildkite to enable continuous integration more efficiently. As a result, build time decreased by more than 50%, to only 8 minutes.
This increased efficiency sped up the process for Finder’s developers to get their work onto staging environments, out to production, and into the hands of customers.
Offering stability to the development team
Another challenge for Finder was the stability of the pipeline and the time it took to deploy a hotfix to production. If multiple developers tried to deploy to production at the same time, they would often frustratingly block each other, further increasing everyone’s time to production.
To address this, we identified and rearranged certain key jobs so they could run in parallel. We also identified build steps that could be made more efficient. For example, many jobs downloaded the same libraries, and when the final Docker image was built, those libraries were downloaded yet again, synchronously. By eliminating this duplication, build speed improved significantly.
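The gain from rearranging jobs comes from running independent steps concurrently instead of back to back. A toy model of that idea — the job names and timings are invented, and real pipelines would express this in CI configuration rather than application code:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(jobs, parallel=True):
    """Run independent build jobs sequentially or concurrently.

    A toy illustration: jobs with no dependencies on each other
    (e.g. linting, unit tests, asset compilation) can run at the
    same time, so total wall time approaches the slowest job
    instead of the sum of all jobs.
    """
    start = time.perf_counter()
    if parallel:
        with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
            results = list(pool.map(lambda job: job(), jobs))
    else:
        results = [job() for job in jobs]
    return results, time.perf_counter() - start
```

With three independent jobs of equal length, the sequential run takes roughly three times as long as the parallel one — the same shape of saving the rearranged Buildkite jobs delivered.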
In the end, speeding up the build process meant increased stability and decreased time to deploy a hotfix.
Building comparison tables
Now, after users make an initial selection, they see the most relevant results for their case. We added advanced interactive features, such as calculators at the top of tables, so users can input data relevant to their personal circumstances and see automatically calculated savings for each product.
Taking advantage of the WordPress platform and Buildkite, Last Call Media empowered Finder’s development team to deploy to production efficiently. The improved build process enables developers to get their work to production, and into the hands of customers, much more quickly. Additionally, Finder’s comparison tables now present personalized results, making the user experience far more satisfying.
Now, Finder has a path forward to build stable, interactive comparison tools for users in a highly iterative way.