
Rust at Fullstory, part 2: A look inside our mobile SDK

In the previous blog post—Rust at Fullstory, part 1: Why Rust?—we talked about the factors that led us to choose Rust as the language for the core of our cross-platform mobile framework. In this post we'll discuss some specifics of how we use Rust to power our framework.


The Rust logo, used under CC BY 4.0

When we began writing Rust, we set a rough design guideline that platform-specific concerns should stay in the platform-specific code, and the Rust code should encompass the common platform-agnostic parts. For the most part, this meant we could implement native components in the most straightforward way, and avoid pain where Rust didn’t have a good interface to the platform. We have a set of Rust packages (in Rust lingo, “crates”) in a cargo workspace that divide things up into logical pieces:

  • shared-futures: a combination of re-exports from futures-related crates as well as some common helper functionality around them.

  • shared-flatbuffers: the Rust code generated from our Flatbuffer definitions as well as some additional code for working with those generated types. The generated code is fairly large, and we don’t change the Flatbuffer definitions very often, so having this in a separate crate helps us avoid rebuilding it more often than necessary.

  • shared-core: the bulk of the common code. This crate defines some traits as interfaces to the platform-specific parts of the framework (a minimal sketch of this pattern follows the list).

  • shared-android: an implementation of the traits from shared-core, plus glue code using the jni crate to interface with Java code. We currently hand-author our JNI definitions, using some macros to simplify the declarations.

  • shared-ios: an implementation of the traits from shared-core and FFI functions to provide an interface to Objective C code. We use the cbindgen crate to generate a C header from our Rust function and type definitions.

  • shared-mock: a “mock” platform that provides all the same interfaces but simply runs in a desktop environment, built to ease development of the core Rust code. This lets us develop much of the core in a pure Rust environment and avoid the integration pains of our mobile toolchains.
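
To make this concrete, here’s a minimal sketch of the trait-per-platform pattern described above. The trait, type, and function names (Platform, MockPlatform, record_heartbeat) are hypothetical, not our actual interfaces:

```rust
/// Defined in a crate like shared-core: the platform-agnostic code only
/// talks to the platform through traits like this one.
pub trait Platform: Send + Sync {
    /// Return a timestamp in milliseconds.
    fn now_ms(&self) -> u64;
    /// Hand a serialized payload to the platform layer for upload.
    fn upload(&self, payload: &[u8]);
}

/// Defined in a crate like shared-mock: a desktop-only implementation that
/// lets the core logic be developed and tested in pure Rust.
pub struct MockPlatform;

impl Platform for MockPlatform {
    fn now_ms(&self) -> u64 {
        use std::time::{SystemTime, UNIX_EPOCH};
        SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .map(|d| d.as_millis() as u64)
            .unwrap_or(0)
    }

    fn upload(&self, payload: &[u8]) {
        // On a real platform this would hand the bytes to native networking
        // code; the mock just logs the size.
        println!("mock upload: {} bytes", payload.len());
    }
}

/// Core logic is written against the trait, so shared-android, shared-ios,
/// and shared-mock can each supply their own implementation.
pub fn record_heartbeat(platform: &dyn Platform) {
    let timestamp = platform.now_ms();
    platform.upload(&timestamp.to_le_bytes());
}
```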

For iOS, we have an Xcode project that builds our framework and uses an external build command to invoke cargo build for each target architecture. We build our Rust code as static libraries that we link into the final framework. We’re using the scripts written by Ditto to build a Rust toolchain that uses the same version of LLVM as Xcode does; we do this so that we can ship our framework with embedded LLVM bitcode, for the customers who need that functionality. (We would love to be using a stock upstream Rust toolchain, but this is a tricky problem. We’re hopeful that the Rust project will eventually find a workable solution.)

For Android, we have a Gradle project that builds our Java code and invokes cargo build for each target architecture. We build static libraries with cargo, and a small CMake project builds a few C stubs and links them with the Rust code to produce the shared libraries that we include in our Java library.

For a variety of reasons we had been using nightly Rust, but as of Rust 1.42.0 we won’t require any unstable features. We reduced the risk of using nightly builds by pinning a specific nightly version, which we only update as needed. (We then resolve any incompatible changes as part of updating our toolchain version.)

The developer experience and tooling around Rust is one of the biggest selling points of the language, and we’ve certainly found that to be true for ourselves. Having access to the crates.io ecosystem is a huge benefit (I’ll discuss some specific highlights in the next section). Most of our developers are using the fantastic IDE support in either Visual Studio Code or IntelliJ, with a few using the Rust Language Server integration in editors like vim. Rust’s approach of following platform conventions and leveraging LLVM’s world-class functionality means that most of the platform-specific developer tooling “just works”. We’ve found that the debuggers and profilers included with Xcode and Android Studio generally work fine, with only a few rough edges.

Overall we feel like the core value propositions that Rust promises (Performance, Reliability, Productivity) have proven true at Fullstory. We’re able to write performant code without spending excessive time on optimization. We don’t find ourselves tracking down weird crashes or data-race bugs reported by our customers in our Rust code. (We still occasionally have to deal with crashes in our Objective C or Java code…) We can confidently make sweeping changes to our codebase and trust that the Rust compiler will point out anything that isn’t correct. And finally, we can bring in developers who are new to our codebase, knowing that the compiler will limit the damage they can do before they have a full understanding of the code burned into their brains.

The Rust Ecosystem


Ferris the crab, the unofficial Rust mascot by Karen Rustad Tölva

Any Rust developer will tell you that another highlight of developing in Rust is having access to a great selection of crates on crates.io. Cargo makes it delightfully easy to pull in crates as dependencies—while too much of a good thing can bloat your project’s build times, there really is something to be said for being able to utilize one of the world’s fastest regular expression libraries by adding just one line to your Cargo.toml! Even if there isn’t a polished, stable, full-featured version of what you need, you can often find something that gives you a considerable head start over having to write your own implementation from scratch.
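
As a tiny illustration (not code from our SDK), pulling in the regex crate with a single Cargo.toml line (regex = "1") gives you a fast, safe regex engine right away:

```rust
use regex::Regex;

fn main() {
    // Compile once and reuse; compilation is the expensive part.
    let date = Regex::new(r"^(\d{4})-(\d{2})-(\d{2})$").expect("valid pattern");

    assert!(date.is_match("2020-03-05"));

    if let Some(caps) = date.captures("2020-03-05") {
        println!("year = {}", &caps[1]); // prints "year = 2020"
    }
}
```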

For developers coming from a C/C++ background who are used to manually vendoring their dependencies, this is a breath of fresh air. For developers who have used languages with package managers like Python’s pip or Node’s npm, this might not be as impressive, but the specifics of cargo’s dependency resolution and Rust’s ability to include multiple incompatible versions of a crate in a dependency tree give you powerful tools to work around the kinds of dependency problems you hit all too frequently in real-world projects.

Additionally, cargo’s built-in support for custom cargo subcommands has given rise to a wide range of useful tools that are just a cargo install away. We use a few of these to great effect in our development process:

  • rustfmt to maintain consistent formatting of our codebase

  • clippy for code linting

  • cargo-deny to check for security advisories in crates we depend on, as well as for validating crates’ license compatibility

We leverage functionality from quite a few external crates in our code. We use async throughout our codebase and rely on many features from the futures crate. Because our code winds up running in other people’s apps, we don’t use a full-fledged async runtime like tokio or async-std, since those are optimized for “use all available resources” scenarios like network servers or standalone client applications. Given that one of our goals is to keep the Fullstory plugin’s overhead on our customers' applications to a minimum, that isn’t a good fit. Instead, we use the ThreadPool provided by the futures crate to run tasks on an appropriately-sized pool of threads.
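
Here’s a minimal sketch of that approach, using the ThreadPool from the futures crate (it requires the crate’s "thread-pool" feature); the pool size and the task are illustrative only:

```rust
use futures::channel::oneshot;
use futures::executor::{block_on, ThreadPool};

fn main() {
    // A small, fixed-size pool keeps the CPU footprint predictable inside a
    // host application, rather than scaling to every available core.
    let pool = ThreadPool::builder()
        .pool_size(2)
        .create()
        .expect("failed to create thread pool");

    let (tx, rx) = oneshot::channel();

    // spawn_ok accepts any `Future<Output = ()> + Send + 'static`.
    pool.spawn_ok(async move {
        let answer = 2 + 2; // stand-in for real background work
        let _ = tx.send(answer);
    });

    // Outside an async context we can block on the result for demo purposes.
    let answer = block_on(rx).expect("task dropped the sender");
    println!("background task produced {}", answer);
}
```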

As mentioned previously, we use the flatbuffers crate as a data serialization layer. Having a cross-language representation is important to us, as the server side that receives the data is written in Go. And like many projects, we use serde and serde_json for working with JSON in a straightforward way. (Seriously, I can’t say enough good things about serde -- if you haven’t seen how amazing serde is to use, go look at the sample code on the serde website!)
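
As a quick, hypothetical illustration of serde (the struct here is made up, not one of our actual types; it needs serde with the "derive" feature plus serde_json):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct SessionEvent {
    name: String,
    timestamp_ms: u64,
}

fn main() -> Result<(), serde_json::Error> {
    let event = SessionEvent {
        name: "page_view".to_string(),
        timestamp_ms: 1_584_000_000_000,
    };

    // Struct -> JSON string.
    let json = serde_json::to_string(&event)?;
    println!("{}", json); // {"name":"page_view","timestamp_ms":1584000000000}

    // JSON string -> struct.
    let parsed: SessionEvent = serde_json::from_str(&json)?;
    println!("{:?}", parsed);

    Ok(())
}
```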

One of my first pull requests after joining Fullstory was changing our error handling from error-chain (which was great when it first came out) to the more modern anyhow for general error handling and thiserror for defining some specific errors in an enum. Rust’s powerful error handling, built around its Result type, is wonderful to work with. The Rust ecosystem is still iterating on the best patterns for defining custom error types and chaining nested errors, so I expect this space to keep improving in the near future.
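
Here’s an illustrative sketch of how thiserror and anyhow divide the work; the error variants and functions are invented for the example:

```rust
use anyhow::{Context, Result};
use thiserror::Error;

// thiserror: precise, typed errors for the cases we want to match on.
#[derive(Error, Debug)]
pub enum UploadError {
    #[error("payload too large: {0} bytes")]
    PayloadTooLarge(usize),
    #[error("session is not active")]
    SessionInactive,
}

fn check_payload(payload: &[u8]) -> Result<(), UploadError> {
    if payload.len() > 1_000_000 {
        return Err(UploadError::PayloadTooLarge(payload.len()));
    }
    Ok(())
}

// anyhow: convenient propagation and context for "general" code paths.
fn upload(payload: &[u8]) -> Result<()> {
    check_payload(payload).context("rejecting upload")?;
    // ... hand the payload off to the platform layer here ...
    Ok(())
}

fn main() {
    if let Err(err) = upload(&vec![0u8; 2_000_000]) {
        // Prints the added context plus the underlying UploadError.
        eprintln!("error: {:#}", err);
    }
}
```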

I mentioned previously that we’re using the jni and cbindgen crates to help with our FFI interfaces to Java and Objective C code, respectively. Rust has some great tooling for interoperating with other languages, including incredible projects like Neon for writing node.js modules and the mind-bending wasm-bindgen for compiling Rust to WebAssembly and working with JavaScript APIs from the browser. All of this means that Rust is a great language for the times when you need to play nicely with other languages.
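
For instance, a hypothetical exported function like the one below (not an actual Fullstory API) is all cbindgen needs in order to emit the corresponding C declaration for the Objective C side:

```rust
use std::ffi::CStr;
use std::os::raw::c_char;

/// Record a named event. cbindgen would emit something like:
///   void fs_shared_record_event(const char *name);
#[no_mangle]
pub extern "C" fn fs_shared_record_event(name: *const c_char) {
    // Defensive handling of the raw pointer coming across the FFI boundary.
    if name.is_null() {
        return;
    }
    let name = unsafe { CStr::from_ptr(name) }.to_string_lossy();
    println!("recording event: {}", name);
}
```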

Future Directions


"LEGO - Back to the future / Delorean"by seter82 is licensed under CC BY-ND 2.0

While we’re incredibly happy with Rust as our choice for the core of our mobile framework, it’s only a small portion of the code that powers Fullstory. We’re always thinking about other places where Rust could help us improve what we’re already doing. We certainly aren’t proposing to rewrite everything in Rust—the goal is not to write Rust for its own sake, or because we love writing Rust, but to use Rust where it really makes sense and lets us do things that we can’t easily accomplish in other languages.

Rust’s first-class support for WebAssembly as a target platform provides an avenue that we’re eager to explore. Can we reuse some of our existing Rust code in our web frontend by compiling it to WebAssembly? We plan to explore this in the near future to find out whether we can improve some performance-intensive parts of the application while also saving ourselves from maintaining a parallel TypeScript implementation of the same code.
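
A minimal wasm-bindgen sketch shows the kind of reuse we have in mind; the function here is a placeholder, not something we have actually shipped:

```rust
use wasm_bindgen::prelude::*;

// Exposed to JavaScript as a plain function once the crate is built for the
// wasm32-unknown-unknown target and run through wasm-bindgen (or wasm-pack).
#[wasm_bindgen]
pub fn summarize(input: &str) -> String {
    // Stand-in for a performance-sensitive routine shared with the SDK.
    format!("{} characters, {} lines", input.len(), input.lines().count())
}
```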

As Fullstory continues to grow its customer base, we are always mindful of ensuring that the services that power our product continue to scale along with that growth. This requires both infrastructure planning to scale services out horizontally and optimization work to find places where our existing code could do the same work faster. As part of this ongoing process, we keep an eye out for places where a targeted application of Rust could improve our ability to scale.

In Conclusion

We’re confident that choosing Rust was the right decision for the Fullstory mobile framework. We recently shipped version 1.0 of our mobile SDK and are continually working to make it better serve our customers' needs. Having our cross-platform code in Rust lets us focus on the important platform-specific work that makes our product stand out instead of writing the same code over and over for multiple platforms. With Rust we don't have to worry that we're shipping security holes in our customers' apps, and we can keep the CPU and memory usage impact low without a huge amount of effort.  We expect that the Rust ecosystem will only continue to improve and pay dividends. Rust adoption continues to increase year over year as more companies discover its benefits for themselves, and we’re glad to be one of them!




Ted Mielczarek

Staff Software Engineer

Ted Mielczarek is a Staff Software Engineer on the Native Mobile team at Fullstory. Having spent his formative years at Mozilla, Ted is a Rust enthusiast and proponent of the open web.