We're proud to provide services to our friends in Government.

Massive performance wins.

Processes
  • Continuous Delivery

Last Call Media team members were invited to join the Digital Services team at Mass.gov, to help them operationalize their Drupal 8 platform following the public launch.

Mass.gov is the website of the Commonwealth of Massachusetts. Its primary stakeholders are the constituents who visit the website and the state organizations that publish content on it and use it as part of their jobs. The site receives upward of 15 million page views a month, and changes are released twice weekly by a team of developers. The traffic profile is interesting, yet very predictable: the vast majority of traffic occurs between 8:00 am and 8:00 pm during the business week. Site editors work during the business day (9:00 am to 5:00 pm), and releases happen after the work day is over (after 8:00 pm). On analytics graphs, there are always five traffic spikes corresponding with work days, except when there is a holiday, and then there are four.

LCM assisted in making some pretty dramatic changes on both the front and back end of the site; every action we took was in service of either site stabilization or improving content “freshness.” State employees need to be able to publish content quickly, and constituents need fast access to the information that’s being published, without disruptions while they’re visiting the site. These two needs can be considered opposing forces, since site speed and stability suffer as content freshness increases (that is, as the time an editor waits to see the effect of their changes gets shorter). Our challenge was to find a way to balance those two needs, and we can break down our progress across an eight-month timeline:

September, 2017

The new Mass.gov site launched after roughly a month in pilot mode, and we saw an increase in traffic which corresponded with a small response time bump. The site initially launched with a cache lifetime of over an hour for both the CDN and the Varnish cache. This meant that the site was stable (well insulated from traffic spikes), but that editors had to wait a relatively long time to see the content they were publishing.

November, 2017

We rolled out the Purge module, which we used to clear Varnish whenever content was updated. Editors now knew that it would take less than an hour for their content changes to go out, but at this point, we still weren’t clearing the CDN, which also had an hour lifetime. Site response time spiked up to about two and a quarter seconds as a result of this work; introducing “freshness” was slowing things down on the back end.

December, 2017

We realized that we had a cache-tagging problem. Authors were updating content and not seeing their changes reflected everywhere they expected. This was fixed by “linking up” the site’s cache tags so that they propagated to all the pages they should. We continued to push in the direction of content freshness, at the expense of backend performance.
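In Drupal 8 terms, “linking up” means attaching the cache tags of referenced content to the render arrays that display it, so invalidating one node also invalidates every page that renders it. A minimal sketch of that pattern (the field and function names here are hypothetical, not the Mass.gov code):

```php
<?php

use Drupal\Core\Cache\Cache;
use Drupal\node\NodeInterface;

/**
 * Builds a "related services" list for a node, attaching the cache tags of
 * every related node so this list (and any page it appears on) is rebuilt
 * when that related content changes.
 */
function example_related_services_build(NodeInterface $node): array {
  $build = [
    '#theme' => 'item_list',
    '#items' => [],
  ];
  foreach ($node->get('field_related_services')->referencedEntities() as $related) {
    $build['#items'][] = $related->toLink()->toRenderable();
    // Merge each related node's cache tags (e.g. "node:123") into the render
    // array so they bubble up to the page-level cache metadata and on out to
    // Varnish and the CDN.
    $build['#cache']['tags'] = Cache::mergeTags(
      $build['#cache']['tags'] ?? [],
      $related->getCacheTags()
    );
  }
  return $build;
}
```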

To address the growing performance problem, we increased the Drupal cache lifetime to three hours, meaning Varnish would hold onto things for up to three hours, so long as the content didn’t get purged out. As a result of our Purge work, any content updates would be pushed up to Varnish, so if a page was built and then immediately updated, Varnish would show that update right away. However, we saw very little performance improvement as a result of this.

January, 2018

Early in the month, we experienced a backend disruption due to some JavaScript changes that were deployed for a new emergency alert system. In development, we added a cache-busting query parameter to the end of our JSON API URL to get the emergency alerts. However, in the production environment, we were adding one additional, completely uncached request for every person that hit the site. As a result of this relatively minor change, the backend was struggling to keep up (although constituents saw almost no impact because of the layered caching). This illustrated the importance of considering the performance impact of every single PR.

Careful study of the cache data revealed that each time an editor touched a piece of content, the majority of the site’s pages were being cleared from Varnish. This explained the large spike in the response time when the Purge work was rolled out, and why raising the Drupal cache lifetime really didn’t affect our overall response time. We found the culprit to be the node_list cache tag, and so we replaced it with a system that does what we called “relationship clearing.” Relationship clearing means that when any piece of content on the site is updated, we reach out to any “related” content, and clear the “cache tag” for that content as well. This let us replace the overly-broad node_list cache tag with a more targeted and efficient system, while retaining the ability to show fresh content on “related” pages right away. The system was backed by a test suite that ensured that we did not have node_list usages creep back in the future. This earned us a massive performance boost, cutting our page load time in half.  
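The Mass.gov implementation is more involved, but the core idea of relationship clearing can be sketched with a node update hook that invalidates the cache tags of related content instead of the site-wide node_list tag (module and field names below are hypothetical):

```php
<?php

use Drupal\Core\Cache\Cache;
use Drupal\node\NodeInterface;

/**
 * Implements hook_ENTITY_TYPE_update() for node entities.
 *
 * "Relationship clearing": rather than relying on the broad node_list cache
 * tag (which clears every listing on the site), invalidate only the cache
 * tags of content related to the node that was just saved.
 */
function example_node_update(NodeInterface $node) {
  $tags = [];
  // Collect the cache tags of every piece of "related" content so pages
  // referencing it are rebuilt with fresh data on their next request.
  if ($node->hasField('field_related_content')) {
    foreach ($node->get('field_related_content')->referencedEntities() as $related) {
      $tags = Cache::mergeTags($tags, $related->getCacheTags());
    }
  }
  if ($tags) {
    // Invalidation propagates through Purge to Varnish and the CDN as well.
    Cache::invalidateTags($tags);
  }
}
```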

We found that the metatag module was generating tokens for the metatags on each page twice. The token generation on this site was very heavy, so we patched that issue and submitted the patch back to Drupal.org.

February, 2018

We had another backend disruption due to some heavy editor traffic hitting an admin view; our backend response time spiked up suddenly by about 12 seconds. A pre-existing admin view had been modified to add a highly desired new search feature. While the search feature didn’t actually change the performance of the view, it did make it much more usable for editors, and as a result, editors were using it much more heavily than before. This was a small change, but it took what we already knew was a performance bottleneck and forced more traffic through it. It demonstrates the value of being proactive about fixing bottlenecks, even if they aren’t causing immediate stability issues. It also taught us a valuable lesson: traffic profile changes (for example, as a result of a highly desired new feature) can have a large impact on overall performance.

We got a free performance win just by upgrading to PHP 7.1, bringing our backend response time from about 500 milliseconds down to around 300.

We used New Relic for monitoring, but the transaction table it gave us presented information in a relatively obtuse way. We renamed the transactions so that they made more sense to us, and had them broken down by the specific buckets that we wanted them in, which just required a little bit of custom PHP code on the backend. This gave us the ability to get more granular about what was costing us on the performance side, and changed how we started thinking about performance overall.
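With the New Relic PHP agent, renaming a transaction is a one-line call. A hedged sketch of the kind of bucketing we mean (the bucket names and the place it is called from are illustrative, not the production code):

```php
<?php

/**
 * Renames the current New Relic transaction so it groups into a meaningful
 * bucket (by Drupal route) instead of an opaque controller name. Intended to
 * be called early in the request, e.g. from a request event subscriber.
 */
function example_rename_transaction(string $route_name): void {
  if (!extension_loaded('newrelic')) {
    return;
  }
  // Group node views by content type rather than lumping everything under a
  // single "/node/{node}" transaction.
  if ($route_name === 'entity.node.canonical') {
    $node = \Drupal::routeMatch()->getParameter('node');
    newrelic_name_transaction('node_view/' . $node->bundle());
  }
  else {
    newrelic_name_transaction($route_name);
  }
}
```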

We added additional metadata to our New Relic transactions so we could begin answering questions like “What percentage of our anonymous page views are coming from the dynamic page cache?” This also gave us granular insight on the performance effects of changes to particular types of content.
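The extra metadata is attached with custom transaction attributes. For example, recording the dynamic page cache status header on each response makes the “what percentage of anonymous page views came from the dynamic page cache?” question answerable directly in New Relic. A sketch; the attribute name is our own:

```php
<?php

use Symfony\Component\HttpFoundation\Response;

/**
 * Tags the current New Relic transaction with Drupal's dynamic page cache
 * status, taken from the X-Drupal-Dynamic-Cache header core sets (HIT, MISS,
 * or UNCACHEABLE).
 */
function example_tag_transaction(Response $response): void {
  if (!extension_loaded('newrelic')) {
    return;
  }
  $dynamic_cache = $response->headers->get('X-Drupal-Dynamic-Cache', 'UNCACHEABLE');
  newrelic_add_custom_parameter('drupal_dynamic_cache', $dynamic_cache);
}
```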

We performed a deep analysis of the cache data in order to figure out how we could improve the site’s efficiency. We broke down all the cache bins that we had by the number of reads, the number of writes, and the size. We looked for ways to make the dynamic page cache table, cache entity table, and the render cache bin a little bit more efficient.
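Row counts and on-disk size for each cache bin are easy to pull straight from the database. A minimal sketch, assuming the default MySQL backend and no table prefix (read and write counts came from query logging and New Relic rather than this query):

```php
<?php

// Summarize every cache_* table in the Drupal database by row count and size.
$connection = \Drupal::database();
$result = $connection->query("
  SELECT table_name AS tbl,
         table_rows AS row_count,
         ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
  FROM information_schema.tables
  WHERE table_schema = DATABASE()
    AND table_name LIKE 'cache%'
  ORDER BY (data_length + index_length) DESC
");
foreach ($result as $row) {
  printf("%-40s %10d rows %8.1f MB\n", $row->tbl, $row->row_count, $row->size_mb);
}
```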

We replaced usages of the url.path “cache context” with “route” to make sure that we were generating data based on the Drupal route, not the URL path.
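Concretely, that is a one-word change in the #cache metadata of the affected render arrays. With url.path, the cache is keyed on the raw path, so the same page reached through different paths or aliases is cached separately; the route context keys it on the resolved route and its parameters instead (the theme hook below is hypothetical):

```php
<?php

// Sketch: a render array varied by Drupal route rather than by URL path.
$build = [
  '#theme' => 'example_breadcrumb_block',
  '#cache' => [
    // Previously: 'contexts' => ['url.path'],
    'contexts' => ['route'],
  ],
];
```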

The feedback form at the bottom of each page takes a node ID parameter, and that’s the only thing that changes when it’s generated from page to page. We were able to use a “lazy builder” to generate the form once, cache it everywhere, and inject the node ID right as each cached copy was used.
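A sketch of that lazy-builder pattern, with hypothetical class and element names: everything except the placeholder is shared across pages, and the callback fills in the per-page node ID when the cached markup is served.

```php
<?php

use Drupal\Core\Security\TrustedCallbackInterface;

/**
 * Lazy builder for the only per-page piece of the feedback form: the node ID.
 */
class FeedbackFormBuilder implements TrustedCallbackInterface {

  public static function trustedCallbacks() {
    return ['nodeIdField'];
  }

  /**
   * Lazy builder callback: returns the tiny, per-page hidden input.
   */
  public static function nodeIdField($node_id) {
    return [
      '#type' => 'html_tag',
      '#tag' => 'input',
      '#attributes' => [
        'type' => 'hidden',
        'name' => 'node_id',
        'value' => $node_id,
      ],
    ];
  }

}

// In the form's render array, everything except this placeholder is cacheable
// across every page on the site.
$build['node_id'] = [
  '#lazy_builder' => [FeedbackFormBuilder::class . '::nodeIdField', [$node_id]],
  '#create_placeholder' => TRUE,
];
```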

We took a long hard look at the difference between the dynamic page cache and the static page cache. Without any Drupal page caching, our average response time was 477 milliseconds. When we flipped on the dynamic page cache, we ended up with a 161 millisecond response, and with the addition of the static page cache, we had a 48 millisecond response. Closer analysis showed that Varnish already covered the same use case as the static page cache (caching per exact URL), so the dynamic page cache was the most valuable option for us.

We automated a nightly deployment and subsequent crawl of site pages in a “Continuous Delivery” environment. While this was originally intended as a check for fatal errors, it gave us a very consistent snapshot of the site’s performance, since we were hitting the same pages every night. This allowed us to predict the performance impact of upcoming changes, which is critical to catching performance-killers before they go to production.

As a result of all the work done over the previous five months, we were able to improve our content freshness (cache lifetime) from 60 minutes to 30 minutes.

April, 2018

We enabled HTTP/2, a revision of the HTTP protocol that lets browsers multiplex requests for multiple static assets over a single connection.

We discovered that the HTML response was coming across the wire with, in some cases, up to a megabyte of data. That entire chunk of data had to be downloaded first before the page could proceed onto the static assets. We traced this back to the embedded SVG icons. Any time an icon appeared, the XML was being embedded in the page. In some cases, we were ending up with the exact same XML SVG content embedded in the page over 100 times. Our solution for this was to replace the embedded icon with an SVG “use” statement pointing to the icon’s SVG element. Each “used” icon was embedded in the page once. This brought pages that were previously over a megabyte down to under 80 kilobytes, and cut page load time for the worst offenders from more than 30 seconds to less than three seconds.

We reformulated the URL of the emergency alerts we’d added previously to specify exactly the fields that we wanted to receive in that response, and we were able to cut it down from 781 kilobytes to 16 kilobytes for the exact same data, with no change for the end users.

We switched from WOFF web fonts to WOFF2 for any browsers that would support it.

We used preloading so those fonts were requested immediately after the HTML response was received, significantly shortening the time to the first render of the page.

We added the ImageMagick toolkit contrib module, and enabled the “Optimize Images” option. This reduced the weight of our content images, with some of the hero images being cut by over 100 kilobytes.

We analyzed the “Did you find what you were looking for?” form at the bottom of every page, and realized that the JavaScript embed method we were using was costing our users a lot in terms of bandwidth and load time. Switching this to a static HTML embed with a snippet of JavaScript to submit the form had a dramatic improvement in load time.

The Mass.gov logo was costing the site over 100 kilobytes, because it existed as one large image. We broke it up so that the image of the seal would be able to be reused between the header and the footer, and then utilized the site web font as live text to create the text to the right of the seal.

Additional efforts throughout this time included:

  • Cleaning up the JavaScript. We found an extra copy of Modernizr, another of Handlebars, and a lot of deprecated JS that we were able to get rid of.

  • We removed the Google Maps load script from every page and only added it back on pages that actually had a map.

  • We lazy-loaded Google search so that the auto-complete only loads when you click on the search box and start typing.

Our work across these eight months resulted in huge improvements in both the front and back end performance of the Mass.gov site. We achieved a 50% overall improvement in the back end performance, and a 30% overall improvement in the front end performance. We continue to work alongside the Digital Services team on these and other efforts, striving for the best possible experience for every single user and constituent.

See the BADCamp presentation about this work here.

Paid Family and Medical Leave for the Commonwealth of Massachusetts

Processes
  • Agile/Kanban
  • Agile/Scrum
Team Leadership
  • Senior Producer
    Kelly Albrecht

Last Call Media joined the Commonwealth’s Department of Family and Medical Leave (DFML) to implement new technology for assuring the stability of PFML claims intake and administration in time for the program’s New Year’s Day launch deadline.

Last Call’s focus was to facilitate communication, quality control, and confidence among the teams, establishing an “End to End” vision of the applicant journey that crossed multiple layers of technology. Last Call Media was one of several teams that came together with DFML to achieve the ultimate goal of the project: a system that made applying for and managing PFML claims as easy as possible, which required careful orchestration between the teams working on its discrete components.

Earlier in the year, Last Call Media worked with the Massachusetts Department of Unemployment Assistance on a project in which we implemented automated testing and other automation processes. Word of the successful outcomes traveled through departments. When it came time to enhance the DFML’s program with DevOps automation, they contacted LCM.

What the DFML had was a team of teams, each optimized to its own workflows and working on individual pieces of one greater product. The component-based team model increases efficiency as the larger technical foundations of a product are built, yet the integrations between those components can become a blind spot needing special consideration: integration testing was a known need in the project strategy. Last Call Media was brought on to be the integration testing team, and we knew from experience that concentrating all testing in a separate group, as a final “phase” all work must pass through, leads to surprise issues arising too late in the life of a project. That mattered here because the timeline was non-negotiable: constituents needed to be able to apply for PFML benefits on January 1, 2021, no matter what.

As we began to work with the existing teams, we saw exactly what we could bring to the table: a strong strategy, clear approach, and defined process for integrating all work across every team, and systematically testing that work, as early in the development process as possible, so that fully tested product releases could be done with confidence and ease.

There were four main aspects of this project that needed to be considered in order to achieve success:

  • The claimant portal, built using React, where constituents would be able to submit PFML claims and receive updates about the status of those claims,
  • The claim call center, where customer service representatives would take calls from claimants and enter their claim information into the claimant portal,
  • The claims processing system, the tool in which customer service representatives can process PFML claims via a task queue (and which is fed information from the portal, call center, and other third-party tools), and
  • The API that would bring all of these parts together to work seamlessly.

Then, of course, there’s the testing. LCM began our work by establishing three types of tests that all project work would need to pass in order to be considered complete:

  • End-to-End (E2E) testing: automated, continuous verification of the functionality and integration of all four systems.
  • Load and Stress testing: verifying the E2E functionality and integration under substantial strain to see what the system can sustain, where it breaks, what breaks it, etc.
  • Business Simulation testing: verifying if the people behind the scenes who will be doing this work on a daily basis can effectively perform said work with the systems and functionality that have been put into place, and whether this work can be performed when there is a substantial amount of it.

As we worked to set up the proper tests for the product, we found many opportunities to gain alignment across all of the development teams with our overall testing philosophy: it should be a part of each team’s workflow instead of a final phase removed from the team(s) performing the work. We helped coach each team on delivering value incrementally, and their eventual ownership of where the E2E testing suite impacts their work. LCM brought testing to the program and enabled the teams to absorb it as their own.

I have been impressed with you and team from day 1.

Matthew Kristen, Executive Program Manager, State of Massachusetts

Last Call Media came to the PFML project not just to establish automated testing, but to ask timely, hard questions about how the program was managing dependencies, how the sequencing of each team’s deliverables was planned, and how completed work was being demonstrated; when something wasn’t previously considered or prioritized, LCM made sure to find out why. Through the understanding that our experience in DevOps and application readiness affords us, we sought to shine a light into the cracks of the program, making it possible to deliver, with certainty, a functional and effective product to the constituents of Massachusetts.

Last Call Media takes an immense amount of pride in the difficult work all of the teams performed, and their willingness to embrace the testing processes we implemented within their workflows. With the successful launch of the PFML program, LCM is happy to see further proof of the strength of enabling teams to own 100% of their work.

Branding and print design.

Processes
  • Continuous Delivery

As part of an effort to support local commerce and community, Last Call Media partnered with the Downtown Northampton Association, an organization in LCM’s home city that seeks to improve the business and cultural strength of the downtown area through investments in programming, beautification, and advocacy.

How we did it

We joined the effort, bringing our design expertise to the cause of a more beautiful downtown. In partnership with the DNA’s Executive Director and a board of local luminaries, Last Call Media helped brand the organization with print materials, signage, and digital media, creating a universally recognizable identity for the organization, assisting with fundraising efforts, and sparking demand for co-branding materials from downtown businesses.

Images: logo, collateral, brochure, brochure detail, logo alternates.

A new design for PVPC.

Processes
  • Continuous Delivery
Team Leadership
  • Senior Producer
    Colin Panetta
  • Art Director
    Colin Panetta

The Pioneer Valley Planning Commission (PVPC) is the regional planning body for the Pioneer Valley region, which encompasses 43 cities and towns in the Hampden and Hampshire county areas of Massachusetts. PVPC asked LCM to redesign their aging Drupal site with a new look and feel, and to bring it into compliance with new government regulations surrounding content and site accessibility.

Working with PVPC 

We took the project from initial discovery and strategy through information architecture, design, and development. We delivered a compelling, modern, and effective design with PVPC’s target users in mind. Our discovery and strategy work informed a new design with improved site navigation and menu structure, reworking the existing navigation system to create a more fluid experience for visitors.

Catalog integration for Queens Library.

Processes
  • Continuous Delivery
Team Leadership
  • Senior Producer
    Kelly Albrecht
  • Senior Architect
    Kelly Albrecht
  • Senior Developer
    Kelly Albrecht

Team augmentation for increased capabilities.

Queens Library needed to integrate its developing content management system with its Book and Media Catalog systems to display real-time information and allow interaction between site visitors and its collection.

We were approached for assistance in developing the custom module foundations for these integrations.

We joined the Queens Library IT team and provided coaching as well as custom code.

Our engagement included working with in-house developers and other development teams to build custom modules, displays, and workflows to complete the integrations. Handoff of our work included training and enablement of internal Queens Library developers.

Queens Library launched its new and fully integrated website on Drupal as an interface to display real-time catalog information and facilitate customer interaction.

A Hub for Emergency Preparedness.

Processes
  • Agile/Scrum

San Francisco takes emergency preparedness seriously.

As the fourth largest city in California, San Francisco also serves as a center for business, commerce, and culture for the West Coast. To support the City’s commitment to emergency preparedness, the Department of Emergency Management (DEM) designed and developed a campaign to drive citizens to better understand how to be prepared in the event of an emergency. And in the unfortunate event that disaster does strike, the site transitions into a communication platform where citizens can find the most up-to-date information directly from the City.

DEM had invested significant effort into creating a very engaging website to communicate to the public about emergency preparedness. However, the site was developed in a way that did not facilitate quick and easy content changes, a critical need when up-to-the-minute accurate information must be published. The site also fell short on a number of accessibility metrics.

How we did it.

When it’s business as usual, the site serves as a platform to generate awareness for how someone can better prepare themselves and their family in the event of an emergency. Visitors can download checklists, and complete forms, in addition to reading about how to prepare for different kinds of disasters, like an earthquake or tsunami. However, in the event of an emergency, the City can quickly enable a separate emergency home page which presents visitors with vastly different dynamic content updated in real time specific to the emergency, including an embedded interactive Google Crisis Map that displays information aggregated from a variety of external sources managed by the City.

Last Call Media provided a direct replacement of the existing site in Drupal 8, leveraging the out-of-the-box D8 accessibility features and the user-friendly D8 in-place content editing interface. We also reduced the maintenance burden by bringing the blog, which had been a separate site, into the main site.

Our accessibility audit revealed that the original color palette used for the site design relied heavily on colors that did not meet WCAG 2.0 contrast requirements. We were able to identify a compliant color scheme that remained within the existing brand guidelines for the new site. The site also relied heavily on icon fonts that were not taking advantage of Unicode’s private use area, and the HTML elements displaying the icons did not use appropriate ARIA attributes. Rebuilding the icon font and HTML markup to take advantage of those tools helped to greatly improve the screen reader experience.

Another area that needed improvement was the general accessibility of interactive elements. Sections like flyout menus and tabs were difficult to navigate via keyboard, and were missing the ARIA attributes that make them easier to understand and use. During the rebuild we switched away from mostly-homegrown CSS and JS and leveraged the Foundation CSS/JS framework instead. This change provided several benefits: many of the missing accessibility features come with Foundation’s components out of the box, the nuanced details of the styling stayed more consistent across different areas of the site, and development was expedited as well.

The City of San Francisco now has a means of communicating its emergency preparedness message with a site that is engaging, nimble, and robust.

Best-in-class content delivery and caching.

Processes
  • Continuous Delivery

As part of our ongoing engagement with the Commonwealth, we identified an opportunity to improve customer and constituent experience by leveraging the Cloudflare CDN (Content Delivery Network). Following the initial discovery phase, we architected, implemented, and deployed Cloudflare’s Global CDN product to give the site best-in-class content delivery and caching, maintaining all the functionality of the previous CDN while improving development capabilities.

Creating Undetected Changes.

In the discovery phase, we reviewed the marketplace to find the most appropriate CDN for the State’s use case, balancing security, performance, and cost considerations. Ultimately, Cloudflare was selected as the best fit because of its extensive firewall and DDOS protections, and granular cache control using “Cache Tags,” which have the potential to boost performance for the constituents and reduce the risk of site instability.  

The first, and perhaps most critical concern we addressed in the course of this project was that the CDN needed to be resilient, serving pages even if the site itself was not functioning properly. For example, the development team does code releases periodically that take the backend of the site completely offline, but constituents still need to be able to access content during this time. To meet this requirement, we adjusted the site’s caching headers to include directives to serve cached responses in the event of an error response received from the origin.  As a result, constituents are able to access the majority of Mass.gov, even if a catastrophic event takes the web servers completely offline.
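One common way to express “keep serving the cached copy if the origin errors” in the site’s own headers is the stale-if-error Cache-Control extension from RFC 5861. A hedged sketch of appending it from Drupal; the module name, event priority, and TTLs are illustrative, not the production configuration:

```php
<?php

namespace Drupal\example_cdn\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\ResponseEvent;
use Symfony\Component\HttpKernel\KernelEvents;

/**
 * Adds stale-serving directives (RFC 5861) to cacheable responses so the CDN
 * may keep serving a cached copy if the origin starts returning errors.
 * Registered as an event_subscriber service in the module's services.yml.
 */
class StaleCacheSubscriber implements EventSubscriberInterface {

  public static function getSubscribedEvents(): array {
    // Run late so we append to the Cache-Control header core has already set.
    return [KernelEvents::RESPONSE => ['onResponse', -100]];
  }

  public function onResponse(ResponseEvent $event): void {
    $response = $event->getResponse();
    if ($response->isCacheable()) {
      // Serve stale content for up to a day if the origin errors out, and for
      // an hour while a fresh copy is fetched in the background.
      $response->headers->set(
        'Cache-Control',
        $response->headers->get('Cache-Control') . ', stale-if-error=86400, stale-while-revalidate=3600'
      );
    }
  }

}
```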

As a government website, Mass.gov is always at risk of attack from malicious actors. To mitigate this risk, Last Call Media undertook extensive configuration and testing of Cloudflare’s various security features, including the Web Application Firewall (WAF), DDOS protections, and custom firewall rules. We had a few hiccups along the way with configuring the security features (at one point, content authors were receiving CAPTCHA verifications when submitting their changes), but were ultimately able to work through these issues to dial in the right balance of security and ease-of-use.

Next, we implemented Cloudflare’s brand new “Workers” feature, which gives granular control over CDN functionality using a JavaScript “service worker.” The Worker we wrote for this project handles more than 6 million requests a day, and gives the Commonwealth the ability to test and deploy CDN-level changes to development, staging, and production environments independently, making it much faster and safer to verify and release changes. The Worker implementation benefits the Commonwealth by giving it flexibility for the future, while also reducing cost over the previous CDN.

These Workers were also integral to the success of this migration beyond what we had initially imagined. During the testing and release phases of the project, they gave us a mechanism for making changes that was reviewable and testable. Having a well-defined review and deployment process improved the team’s visibility into what changes were being made, and let us avoid silly mistakes. Overall, we felt the development team’s velocity was greatly improved by using this workflow.

The migration went as smoothly as possible; there were no negative results.

Mass.gov raved:

I hope you are puffed up with pride. We simply couldn’t have done any — much less ALL — of this mountain of work without you. You’ve been a rock. Well, a very hard-working and creative rock. We are so lucky to have your help.

Lisa Mirabile, Project Manager

For the future, we envision a phase 2 of granular cache invalidation. When a piece of content changes, the CDN would invalidate only that piece of content so it stays fresh in the cache. That would let us set very long cache lifetimes at the edge: pages could be cached for as long as a year, with invalidation keeping them fresh whenever they need to be, significantly reducing the load on the backend servers. In the current state, pages only get cached for 30 minutes. With a longer cache lifetime, we’d see immediately faster load times, lower infrastructure costs, and less chance of a backend disruption.
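For illustration only, this is roughly what purging by cache tag against Cloudflare’s API looks like when a node is saved. The credentials, tag naming, and function are hypothetical, and in practice this wiring would be handled by the site’s purger integration rather than hand-written code:

```php
<?php

/**
 * Asks Cloudflare to purge specific cache tags (an Enterprise feature)
 * instead of relying on short TTLs. Zone ID and token come from environment
 * variables here purely for the sake of the sketch.
 */
function example_purge_cloudflare_tags(array $tags): void {
  $zone_id = getenv('CLOUDFLARE_ZONE_ID');
  \Drupal::httpClient()->post(
    "https://api.cloudflare.com/client/v4/zones/{$zone_id}/purge_cache",
    [
      'headers' => [
        'Authorization' => 'Bearer ' . getenv('CLOUDFLARE_API_TOKEN'),
        'Content-Type' => 'application/json',
      ],
      'json' => ['tags' => $tags],
    ]
  );
}

// e.g. after saving node 123:
// example_purge_cloudflare_tags(['node:123']);
```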

Ease-of-use API Key Management Tool.

Processes
  • Agile/Kanban

In order to remove the requirement of understanding AWS, LCM built a serverless API that interacts with API Gateway. This allows the EOTSS team to manage API keys without having to interface with AWS directly.

The Commonwealth of Massachusetts maintains several APIs used by a variety of internal and external teams.

The APIs are built on AWS API Gateway, which allows each granted user to be assigned an API key with functionality such as rate limiting and access to one or more of the state’s APIs. This system is great to work with, but it requires whoever manages the applications to be very familiar with AWS and API Gateway in order to add or modify a user’s access to each API.
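Those are the kinds of calls the tool wraps. The production code runs as a serverless API, but the underlying API Gateway operations can be sketched with the AWS SDK (PHP shown here purely for illustration; names and IDs are placeholders):

```php
<?php

require 'vendor/autoload.php';

use Aws\ApiGateway\ApiGatewayClient;

// Create a key for a granted user and attach it to a usage plan, which is
// where throttling and access to specific APIs/stages are defined.
$client = new ApiGatewayClient([
  'region' => 'us-east-1',
  'version' => 'latest',
]);

// 1. Create the API key itself.
$key = $client->createApiKey([
  'name' => 'partner-team-example',  // hypothetical key name
  'enabled' => true,
]);

// 2. Associate it with a usage plan carrying rate limits and API access.
$client->createUsagePlanKey([
  'usagePlanId' => 'usage-plan-id-here',  // placeholder
  'keyId' => $key['id'],
  'keyType' => 'API_KEY',
]);
```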

The application requires OAuth2 authentication through GitHub in order to access the tool, and has a ReactJS frontend that allows authenticated users to easily administer the keys.

Enabling continuous improvement by listening to constituents.

Processes
  • Agile/Scrum
Team Leadership
  • Art Director
    Colin Panetta

Feedback is of the utmost importance to the Commonwealth.

The highest priority of the Commonwealth of Massachusetts is serving its constituents as best it can. Essential to that is feedback—hearing directly from constituents about what they’re looking for, how they expect to find it, and where any improvements in that journey can be made.

We partnered with the Commonwealth to design a component for Mass.gov that would gather useful feedback from constituents, and another component that would display that feedback to all 600+ of the site’s content authors in a way that maximizes their ability to make improvements.

Watch “Collecting and using feedback on Mass.gov,” a session about this project presented at Design 4 Drupal by Colin Panetta of Last Call Media and Joe Galluccio of Massachusetts Digital Services, or scroll down for our written case study.

Getting feedback from constituents to site authors.

Discovery

The success of Mass.gov hinges on getting the right feedback from constituents to site authors. Our first step in overhauling the way Mass.gov collects feedback was to define what we needed to know about each page in order to improve it, so we could design the feedback component around that. It consisted of the following:

  • Whether or not users found what they were looking for, and what that was.
  • Context for the above: how satisfied users are with the page, and what they came to the site to do.
  • Very detailed feedback that could only be provided through the user panel, a list of nearly 500 constituents who have volunteered to test new features for the site.

With our broad goals defined, we wanted to make sure the feedback component was working on a more granular level as well. We conducted a series of interviews with site authors asking how to best reach their users, and gained some valuable insight. Here’s what they told us:

  • Too much information in the feedback form would scare users away.
  • Feedback was being submitted with the expectation of a response, and organizations wanted to be able to respond.
  • But not all organizations would be able to respond, so a variety of contact options needed to be available to them.

Strategy

We combined what we learned above with our best practices to make a set of requirements that we used to define a strategy. It was immediately obvious that this feedback component needed to do a lot! And like site authors told us, if we showed that to users all at once, we might scare them away.

A sketched figure in an unsure pose stands in front of a tall stack of blocks, each labeled with a step in the feedback process.
Too many steps at once can be daunting.

So, to maximize the number of responses we’d get, we decided to lower the effort for submission by presenting these options one at a time, starting with the step that takes the least amount of commitment and increasing with each step. Users can submit a little bit of feedback, then opt into submitting a little more, and then keep going.

Blocks of increasing size are lined up to form steps. Each block is labeled with a step in the feedback process. A sketched figure climbs the steps.
A step by step approach can make large workflows more palatable to users.

Designing the feedback form

With a clear strategy in place, we designed the following component.

Feedback box asking users if they found what they were looking for, followed by yes/no radio buttons and a submit button.

On first load, the component is very simple — it’s only asking users if they found what they were looking for.

Once users have made a selection, the component expands with fields asking them what they were looking for.

Feedback form asking users if they found what they were looking for, with a larger text field below asking them what they were looking for and a radio button asking if they'd like a response, followed by a submit button.

Site authors have the option of including an alert here that tells users this form is not for urgent assistance, and directs them to a better place where they can do that.

In the above example, the organization who is responsible for this page is able to respond directly to feedback. So if users say they would like a response, a form opens up for them to enter their contact information. If the organization was not able to respond directly to feedback, a brief explanation of why would appear there instead.

 After submitting, users are thanked for their feedback. 

Website component thanking users for their feedback, offering them a link to contact the RMV, and asking if they would like to take a survey.

Seen above, organizations are given the option to link to their contact page. This is commonly used if the organization is unable to respond directly to feedback.

Users are then given the option to take a short survey, where they can provide more detailed feedback.

Survey asking for more detailed feedback from users.

After submitting the survey, users are given the opportunity to join the Mass.gov user panel. This is the largest commitment available for providing feedback, so it’s at the very end!

Component thanking users for submitting their survey, and giving them a button to press if they would like to join the user panel.

So that’s how feedback is collected on the site. But what happens to it after that?

Displaying feedback to site authors

Feedback submitted through the site can be viewed per node, i.e. a site author can go to a specific page through the backend of the site and view all the feedback submitted for that page. But a lot of feedback can be submitted for a single page, and on top of that, site authors are often responsible for multiple or even many pages. Combing through all that feedback can be a prohibitively daunting task, or simply not possible.

To help with this, we designed the “Content that needs attention” panel for site authors.

Website component titled "Content that needs attention," with a description area explaining the component to users, and a table displaying a list of content.

The “Content that needs attention” panel appears on the welcome page on the backend of the site, making it one of the first things site authors see after logging in. It displays the page titles of their 10 pages with the lowest scores from users, sorted by page views. By showing site authors their content that’s seen by the most people first, we’re helping them prioritize what to work on next.

We’re giving site authors additional information about the content right in the component, helping them make decisions at a glance. In addition to the aforementioned page titles, scores, and page views, we’re showing them the content type (since some titles can be very similar on this site), the date they last revised it (in case that helps them know how badly this content needs attention), and something a little surprising… a “Snooze” button!

We put a snooze button in because once site authors make an improvement to content, it’s no longer helpful for them to see it here. So, the way it works is that they make an improvement to content, then hit “Snooze,” and it’ll disappear from this list for one month. At the end of that month, one of two things will have happened: 1) the content will have improved enough to no longer appear on this list, or 2) the content needs more improvement, and will appear back on this list.
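To make the mechanics concrete, here is a purely hypothetical sketch of the kind of query that could drive such a panel: the lowest-scoring content for the current author, ordered by page views, with recently snoozed items excluded. The table and column names are invented for illustration; the real implementation differs.

```php
<?php

// Hypothetical "Content that needs attention" query sketch.
$current_user_id = \Drupal::currentUser()->id();

$query = \Drupal::database()->select('example_feedback_scores', 'fs');
$query->join('node_field_data', 'n', 'n.nid = fs.nid');
$query->fields('n', ['nid', 'title', 'type', 'changed'])
  ->fields('fs', ['score', 'page_views'])
  ->condition('fs.author_uid', $current_user_id)
  // Skip anything the author snoozed within the last month.
  ->condition('fs.snoozed_until', \Drupal::time()->getRequestTime(), '<')
  ->orderBy('fs.score', 'ASC')
  ->orderBy('fs.page_views', 'DESC')
  ->range(0, 10);

$rows = $query->execute()->fetchAll();
```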

This feedback component collects around 30,000 pieces of feedback in a single month. Issues reported by users include missing or hard to find content, mistakes, or issues with the service itself. That feedback is used by Mass.gov’s 600+ site authors to continuously improve the delivery of their vital services to the constituents of Massachusetts.

A faster-than-lightning addressing API.

Processes
  • Agile/Kanban

The Commonwealth of Massachusetts deals with addresses from many organizations, each entering addresses in its own unique way, which means a single address can be entered dozens of different ways. The Massachusetts Bureau of Geographic Information (GIS) maintains a database that uniquely identifies each individual address within the state in a consistent format, so they needed a way to take the many possible variants of each address and reconcile them to a canonical address entry within their internal data set, which was stored in a format that is difficult to query against (FGDB). Because of the volume of data the state deals with, using an external service was outside the allowed budget.

LCM worked with the Commonwealth to build an extract-transform-load tool (ETL) that runs whenever an updated data set is provided by GIS. The ETL takes millions of records from the FGDB dataset, normalizes them, and imports them into an AWS RDS database. The state had a proof-of-concept process for this that took hours to run. By leveraging AWS Fargate and the open-source GDAL library the required time was brought to under 10 minutes.

After the ETL, we built an API endpoint using Serverless.js (AWS) that takes in addresses in a typical mailing address format (which allows for many potential variations). We leverage libpostal to separate the address into distinct address components, and then perform a query against the RDS database to see if any matches are returned.

The serverless architecture of both the ETL and the API endpoints is highly scalable and secure, and the state is only charged while they are actually being used. This allowed us to create a flexible address-matching system that takes advantage of their internal dataset of unique addresses and accepts user input in a wide variety of formats, for a very small cost.

Outcomes: The API has gone through internal testing, is fully functional, and is in the process of being rolled out across the Commonwealth.

Measurements: Delivery of most API responses in under 500 milliseconds.