The SaaS business model is incredibly seductive.
One product, with one codebase, solving the same problem for hundreds of thousands of businesses. No scaling issues, no custom development, and a snowball effect if you get it right.
And best of all, a modestly sized team can beat companies 100 times its size. The barrier to entry has never been lower.
There are, of course, plenty of pitfalls, and, like most products, Twine has fallen into most of them. This time last year we had some serious flaws holding us back:
- Wasting time and money building and maintaining features that nobody wanted
- Flaky features: buggy new releases that broke other, once-solid, features
- Development time being sucked up by bugs and a creaking deployment process
Combined, this wasn’t helping us be the nimble, user-centred, hot-shot challenger we wanted to be. We just weren’t moving fast enough, and we weren’t focused on where we wanted to be.
A year on, that’s no longer the case. We know where we want to be and we know we can get there — fast. This post touches on some of the things that have got us to this position.
The three things we changed
Looking back, there are three tangible changes that got us here.
- A new toolset and attitude to help us know what our customers actually *need*
- Reusable components, meaning we can make high-quality features, fast, with limited resources
- All-new infrastructure so we can test and ship new features with little technical overhead
These changes meant that we can now do a lot with relatively little, and make it count. Being bootstrapped, that really matters.
Reusable components mean fast, robust features
We’ve increasingly committed to reusable components in Twine.
This started with things on the front-end like switchers, buttons and layouts. Pretty standard stuff.
But last year, we took the plunge with a key part of Twine, the data table, and made its functionality reusable too.
Before we tackled this, each data table in Twine (pictured below) was independent of the others, despite performing an identical function.
So, whilst overhauling our centrepiece feature, the Knowledge Base, we started from scratch and created a new component with extra functionality (bulk select/delete, inline editing, drag-and-drop reordering). This new data table was then rolled out across the whole of Twine (about 40 places in all).
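Twine hasn’t shared its actual implementation, but to make the idea concrete, here’s a minimal sketch of what a config-driven data table can look like in TypeScript. Selection, bulk delete and reordering are written once, and each call site only supplies its own columns and rows. Every name here is hypothetical, not taken from Twine’s codebase:

```typescript
// Minimal sketch of a reusable data table (all names hypothetical):
// each page supplies its own column definitions and rows; selection,
// bulk delete and drag-and-drop reordering are implemented once.

interface Column<Row> {
  header: string;
  cell: (row: Row) => string; // how this column renders a given row
}

class DataTable<Row> {
  private selected = new Set<Row>();

  constructor(private columns: Column<Row>[], private rows: Row[]) {}

  headers(): string[] {
    return this.columns.map((c) => c.header);
  }

  // Bulk select: toggle a row in or out of the current selection.
  toggle(row: Row): void {
    if (this.selected.has(row)) this.selected.delete(row);
    else this.selected.add(row);
  }

  // Bulk delete: drop every selected row, then clear the selection.
  deleteSelected(): void {
    this.rows = this.rows.filter((r) => !this.selected.has(r));
    this.selected.clear();
  }

  // Drag-and-drop reordering reduces to moving one row to a new index.
  move(from: number, to: number): void {
    const [row] = this.rows.splice(from, 1);
    this.rows.splice(to, 0, row);
  }

  all(): Row[] {
    return [...this.rows];
  }
}

// Example call site: a settings page only has to describe its own data.
type Doc = { title: string; views: number };
const table = new DataTable<Doc>(
  [
    { header: "Title", cell: (d) => d.title },
    { header: "Views", cell: (d) => String(d.views) },
  ],
  [
    { title: "Onboarding", views: 120 },
    { title: "Holiday policy", views: 45 },
  ],
);
table.move(0, 1); // reorder via drag and drop
table.toggle(table.all()[1]); // select "Onboarding"
table.deleteSelected(); // bulk delete the selection
console.log(table.all().map((d) => d.title)); // prints [ 'Holiday policy' ]
```

The point of the pattern is that new features pay only for their column definitions; the behaviour that used to be duplicated across pages lives in one place.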
It was a big job, and not without its risks. Ripping out 35+ pages of our application was a daunting process, but it has paid dividends. It’s allowed us to:
Speed up development
The majority of new features we’ve created since use this data table component — it’s basically the go-to for any settings page. Making it reusable has massively reduced the development time of these new features. I asked our CTO to write a few lines about it:
“If the product is the tip of the iceberg, the codebase is what lies beneath. It is humungous, and at that scale each line of code needs to work hard and not be duplicated. Building a machine that creates and configures data tables means we can keep adding new functionality without polluting the codebase, and prevents us from constantly reinventing the wheel. It’s features like this that allow us to get back to developing improvements that our users can actually benefit from.”
Speed up design
It also makes wireframing new features a doddle, since new features mostly reuse common elements.
Using a combination of dev-tools, screenshots, Milanote and a light touch of Photoshop we can mock up new features really fast, and get them into development as soon as possible, where we can make tweaks in the browser if something doesn’t look/work right.
This change was all about speed and efficiency. If we want to move faster than our competitors, it’s imperative that we don’t waste time on repetitive tasks. Reusable components help us do this.
Fast, no hassle deployments
We can’t afford to suck up time doing routine tasks like deployments.
Overhauling Twine’s infrastructure was a big job, but the way it has influenced the product is much more than page load speeds and Docker (that’s for another post).
The big change, product-wise, is that we can now deploy a new version of Twine in about ten minutes. That’s compared to the multi-stage, one-hour process that was in place before (a process that involved an actual egg-timer).
Obviously, this makes deploying new features way faster, and it’s done wonders for the mental health of our dev team. But it has also changed our attitude to how we develop new features.
Now that deployments are no biggie, the small details picked up during QA are suddenly worth dealing with. Before, it wasn’t worth the hassle of redeploying for the sake of a text change or a padding issue. Now it’s no problem.
What the new infrastructure has given us is a workflow that produces higher quality features that get to customers faster.
Making the right features with the help of a new toolset
We want to make sure that we are making the right features. Features that will delight our current users and win us new clients.
We’ve made big gains in the last 12 months by focusing on the following:
Knowing what our core feature set is
Google Analytics, combined with Data Studio, has given us fantastic insight into what our clients actually use. Our four ‘core’ features stand out clearly when plotted like this:
As you can see, we’ve got a load of features in Twine, but users are really only interested in a few of them. We’ve been using stats like this to dictate the direction Twine takes. This post from Intercom is a great in-depth guide to this process, and their blog as a whole has proved a valuable resource.
Weekly feedback sessions with sales have also become essential. There’s no getting away from the fact that your salespeople are the ones on the frontline, listening to customers and hearing their candid feedback.
Paul Adams (again, Intercom) sums this up really well in his talk at UX London.
Fast and transparent support
To the client, support just needs to be fast and accurate. But behind the scenes we want it to be visible to everybody, including developers. We’ve done this by switching to a combo of Intercom and Jira — we use a lovely extension that lets our developers see the conversation between the client and support agent, and allows the support agent to see the bug progress through the development stages.
This transparency has ultimately meant faster bug fixing and better communication with customers. Seeing the transcript of a real person being inconvenienced by a bug is a really good addition to tickets; they are no longer tasks to be done but real-life problems to be solved.
Bugsnag also deserves a shout-out here. We often see that a customer has run into a problem before they report it to us. Dreamy.
The impact on the product
I appreciate the above is quite heavy on tools and tech, and doesn’t touch on marketing, strategy, hiring and other high-level decisions that have been made. Frankly, though, there are enough posts out there about those topics, and I find the gory details more interesting. Here are some more:
- Since January, we’ve slashed the time spent deploying updates: what was an hour-long process now takes about ten minutes.
- Since the data table component, we’ve released 6 brand new data tables for 4 new features. They were a breeze to implement.
- Since adopting our new ‘operations toolkit’, we’re hitting an average support response time of 26 minutes, and getting much more transparency between our development team and our users.
- We’re averaging a rating of 4.5 stars on G2Crowd and our clients are appreciating the recent progress on our core features.
- And in the last 6 months, we’ve doubled our MRR 🎉
But perhaps more importantly, we’re clear on what we are and where we’re going, and now we’re able to get there fast. That’s an exciting place to be.
And by the way, if you want to join us on this journey, we’re soon to be looking for our first round of investment on Seedrs. We’ll open this in a couple of months. Follow me on Twitter for updates.