Anyone who has worked at a technology startup knows that change is frequent and inevitable. One of the most exciting changes is the addition of new people. As a team grows, systems must be reworked and replaced to ensure that everyone is supported in their work. Code is no different; a codebase designed to support a team of ten engineers will likely need to undergo some fundamental transformations as that team doubles or triples in size.
At FullStory, we recently felt this tension arise around frontend development. We decided to provision a new team to own this problem, so we gathered lessons learned and set out to stabilize, standardize, and optimize frontend engineering across the organization. Thus, the App Frameworks team was born.
Right off the bat, a lot of great ideas surfaced. We built a shared library of components that other teams could use instead of reinventing the wheel. We started transitioning the app to React from a homegrown framework that had outlived its usefulness. We introduced new tools to enforce code quality, tweaked compiler options, and implemented new ways to test code.
It felt like we were making a ton of progress, and it was clear that throughput for new feature development was increasing. But, could we do better? Could we be crystal clear about the results we were getting, both to ourselves and to outside stakeholders? In a word, what we were lacking was transparency.
There were many reasons for us to invest time into improving transparency. Most obviously, measuring results would help guide future decision making. Additionally, by making our measurements easily accessible to everyone at the company, we could immediately communicate where we were spending time and the value we were creating. We could also more readily identify and celebrate wins, and we could use data to motivate other teams to buy into our projects. Most importantly, striving to be more transparent is just the right thing to do! Intentionally defining and measuring success puts accountability in the right place and goes a long way toward building trust within an organization.
From the outset, it was clear that App Frameworks wouldn’t be able to directly quantify the result of our efforts; our team charter was simply too broad. But, could we form estimates across several dimensions to paint a better picture of our impact? After all, we didn’t need to express value created as an absolute quantity; it only mattered that we could prove that we were trending in the right direction.
With this general idea in mind, we set off to take our first steps toward becoming a more data-driven, transparent team. Our goal was to build an automated system that would collect information on some regular interval. We didn’t want to negatively impact other engineers; whatever we built would have to work within established workflows. And, since we were missing historical data, we needed the ability to pull data retroactively.
Given these constraints, we built a sort of “time machine” using git. The idea was simple: we wrote a script that filters the commit log to the last commit of each day, then checks out each resulting commit sequentially. By running this script, we can travel back in time to a specific day, then play time forwards until we reach the present.
At each stop the time machine makes, we run various static analysis tools across the codebase, pull some data points, and associate them with that point in time. The data is then imported into a graphing utility for easy viewing (in our case, a simple Google sheet). An entire year’s worth of data can be collected in just a few minutes!
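For the curious, here is a minimal sketch of how such a script might look. This isn’t our production tooling: it assumes a Node/TypeScript environment with git on the PATH, a default branch named main, and a hypothetical collectMetrics function standing in for the real analysis.

```typescript
// time-machine.ts: a minimal sketch of the "time machine" idea.
// Run it from outside the working tree (or keep it untracked) so that
// checkouts don't clobber the script itself.
import { execSync } from "child_process";
import { appendFileSync } from "fs";

function git(args: string): string {
  return execSync(`git ${args}`, { encoding: "utf8" }).trim();
}

// List commits oldest-first as "<date> <sha>" pairs, keeping only the
// last commit of each day (later commits overwrite earlier ones).
function lastCommitPerDay(): Array<{ date: string; sha: string }> {
  const latestByDate = new Map<string, string>();
  const log = git(`log --reverse --date=short --pretty=format:"%cd %H"`);
  for (const line of log.split("\n")) {
    const [date, sha] = line.split(" ");
    latestByDate.set(date, sha);
  }
  return [...latestByDate.entries()].map(([date, sha]) => ({ date, sha }));
}

// Hypothetical stand-in for whatever static analysis you want to run.
function collectMetrics(): Record<string, number> {
  return { placeholder: 0 };
}

for (const { date, sha } of lastCommitPerDay()) {
  git(`checkout --quiet ${sha}`); // travel to that day
  const row = [date, ...Object.values(collectMetrics())].join(",");
  appendFileSync("metrics.csv", row + "\n"); // one CSV row per day
}
git("checkout --quiet main"); // return to the present
```

The CSV output maps directly onto a spreadsheet, which is all a simple graphing setup like ours needs.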
Using this technique, we ended up with a growing list of metrics that we can automatically collect over any span of time. A few examples include:
Number of Imports Referencing Shared Component Library
Purpose: Tracks adoption of one of the core libraries that App Frameworks maintains.
Method: Parse our TypeScript into an AST, then walk the tree to find imports from the component library package (a sketch follows below).
Insight: We can clearly see that component library adoption is steadily increasing and continues to be an important asset for building UIs at FullStory. Because this library is widely used, maintaining it is probably a worthwhile investment of time.
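To give a flavor of the method, here’s a rough sketch of how such a count could be gathered with the TypeScript compiler API (one plausible choice of parser; the package name in the snippet is a made-up placeholder, not the real library name):

```typescript
// count-imports.ts: a sketch of counting imports of a shared library.
import * as ts from "typescript";
import { readFileSync } from "fs";

const COMPONENT_LIB = "@shared/components"; // hypothetical package name

function countLibraryImports(filePaths: string[]): number {
  let count = 0;
  for (const filePath of filePaths) {
    const source = ts.createSourceFile(
      filePath,
      readFileSync(filePath, "utf8"),
      ts.ScriptTarget.Latest
    );
    // Recursively walk the AST, counting import declarations whose
    // module specifier points at the component library.
    const visit = (node: ts.Node): void => {
      if (
        ts.isImportDeclaration(node) &&
        ts.isStringLiteral(node.moduleSpecifier) &&
        node.moduleSpecifier.text.startsWith(COMPONENT_LIB)
      ) {
        count++;
      }
      ts.forEachChild(node, visit);
    };
    visit(source);
  }
  return count;
}
```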
Lines of Code in Modern vs. Legacy TypeScript Modules
Purpose: A proxy for progress of a wide-scale code migration that requires the involvement of many teams.
Method: Count the lines of code in files under the appropriate directories (a sketch follows below).
Insight: It appears that the modern codebase is growing while the legacy codebase is remaining relatively constant. Although this isn’t a bad outcome, we would rather see the legacy codebase shrinking over time. This might be an indication that other teams are having trouble with the migration, and there may be some roadblocks that we have yet to address.
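Again as a sketch, a simple recursive line counter is enough for this kind of estimate (the directory paths below are placeholders for wherever the modern and legacy modules actually live):

```typescript
// count-loc.ts: a sketch of the modern vs. legacy line count.
import { readdirSync, readFileSync, statSync } from "fs";
import { extname, join } from "path";

function countLines(dir: string): number {
  let total = 0;
  for (const entry of readdirSync(dir)) {
    const fullPath = join(dir, entry);
    if (statSync(fullPath).isDirectory()) {
      total += countLines(fullPath); // recurse into subdirectories
    } else if ([".ts", ".tsx"].includes(extname(fullPath))) {
      total += readFileSync(fullPath, "utf8").split("\n").length;
    }
  }
  return total;
}

console.log("modern LOC:", countLines("src/modern")); // hypothetical paths
console.log("legacy LOC:", countLines("src/legacy"));
```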
Not too bad for a first step! Today we collect three additional metrics, for a total of five:
Number of unit and integration tests
Percentage of TypeScript modules that are linted
Number of legacy template files still in use
We plan to add a few more in the future, including cyclomatic complexity and test coverage.
Wrapping Up
And there you have it: a low cost, somewhat novel way to collect metrics about our code. Static code analysis can only tell us so much about the impact App Frameworks is having, but it does give us information that we didn’t previously know. And, best of all, these metrics are completely transparent to all FullStorians!