Like lots of fast-growing tech startups, our data needs at Tails.com have changed significantly over the years. It's been a fast journey where it hasn't always been obvious where to turn next. We've gone from a few people in a flat who all know everything that's going on, to a much larger team with very different needs. Throughout, the key outcome has been the same: making great decisions at speed.
If you've got an infinite bank account then some of this is easy. There are loads of great providers out there who can help businesses with this - for a price. Tableau, Periscope, Chartio and Looker all come to mind. As a startup there are always going to be things you'd rather spend your cash on than an all-singing, all-dancing analytics solution. Here the open source community can be a huge asset, and there are some excellent tools out there if you know where to look.
This is the story of how we've done Business Analytics on a Shoestring.
Phase 1: One Mind
In the beginning there was darkness. Then you built an app.
At this stage the only things that you want to measure are the things that you're also building and developing. If you're a developer, then you're already looking at traffic to the website, you already know what's working and what's not because it's what you're working on right now. If you're not a developer, then you're probably sitting next to the developer and they've probably already told you everything you could want to know and more.
In effect the team is so small that no analytics systems are needed beyond your development platform's standard error logging and stats solutions - things like error tracking, uptime monitoring and basic request logging.
But the key thing to remember here is that you're measuring to build and debug, rather than to monitor and optimise. The tools you'll focus on are geared toward this and any "business" users are focusing on designing the business rather than measuring it.
Phase 2: You sold something!
You launched your app/website/social network/internet enabled kettle, and somebody actually used it.
Suddenly there are things to measure, things to monitor, customers to contact and accounts to be reconciled. The likelihood is that any developers on the team still have a huge pile of features to implement (because you launched with just your MVP, right?). This means no analytics platform work is going to happen any time soon. You need to use interfaces for things that already exist.
For websites and front end work there's already a clear first choice in this area: Google Analytics. The entry level tier is free, it's easy to integrate with your website and many people have a basic knowledge of how to use it.
The answer for backends is less obvious at first, but there is a clear choice. Your application will already have some kind of database driving it (whether that's SQL or NoSQL) and that's where the data is - you just need a way of getting it out. That gives you three choices:
- Your development team write SQL queries for your business users. This works in the short term, but remember, they're probably your bottleneck so any time you can save them will make your business move faster.
- You hire a business analyst to write SQL queries for you. This will happen in the future, but unless you've got one who will work for free, this is going to be a more expensive option than you need right now.
- Your business users write their own SQL. You already employ them, they're not the bottleneck and they know the questions they want to ask. As a bonus, knowing the inside of the data systems at this stage is going to make designing new features with your developers an order of magnitude more productive.
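To make the third option concrete, here's a minimal sketch of the kind of question a business user might answer directly with SQL. It uses Python's built-in sqlite3 with an in-memory, hypothetical orders table; in practice the user would point a SQL client at a read-only copy of the production database.

```python
import sqlite3

# Hypothetical example: an in-memory stand-in for the production "orders" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, created_date TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "2018-01-01", 20.0), (2, "2018-01-01", 35.0), (3, "2018-01-02", 15.0)],
)

# The kind of question a business user asks: orders and revenue per day.
rows = conn.execute(
    "SELECT created_date, COUNT(*) AS orders, SUM(total) AS revenue "
    "FROM orders GROUP BY created_date ORDER BY created_date"
).fetchall()

for created_date, n_orders, revenue in rows:
    print(created_date, n_orders, revenue)
```

A handful of patterns like this (filter, group, count, sum) cover a surprising share of early business questions, which is why the learning investment pays off so quickly.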
As for useful tools at this stage, it will depend on your backend database - most ship with a bundled query client, and there are plenty of free SQL GUIs to choose from.
This keeps things super-simple. No additional databases, no additional people on the team, and you keep as close to the One Mind phase as possible. There's a time investment for your core business team (or at least some of them) to learn SQL, but it's an investment that will pay off.
Phase 3: You got busy and hired some people
Phase 2 has worked really well: giving the people making key decisions direct access to the data has meant you've been able to optimise really fast and make your product even more awesome. This means you've been able to expand the team - and given how you solved the problem of data access in Phase 2, you've now got a few new problems.
- Your CFO and Head of Product can get their numbers directly using SQL, but they need to delegate to project managers so they can focus on managing their ever growing team.
- Your new project managers don't know SQL, and their remits no longer cover the whole business and so the value in understanding the whole data structure is less than in Phase 2.
- The two above result in an emerging Data Bottleneck, which is frustrating because in the not too distant past you've been living in a utopia of free data access for all.
This bottleneck can resolve itself in one of three ways:
- People give up and start making decisions without the data they would like. It's easy to see this as always bad, which is not always true. Sensible assumptions and simplifications based on fact are necessary to move at speed. What is bad is if the bottleneck gets to the stage where people just guess.
- New team members up-skill and learn enough SQL. This works well if they can, and they get to a skill level where they can be just as effective as their now-managers. The risk is that they don't, and you end up with a semi-SQL-literate underclass, slowly editing queries which they don't understand. This results in the one thing worse than no data: bad and misunderstood data.
- Whether you can up-skill your business staff to an appropriate level of SQL or you have to hire in specific people for the role, the eventual way to win Phase 3 is the creation of a new kind of person in your organisation: the Analyst.
Whether someone falls into the role or you specifically go out looking for one, the Analyst is a role that was always going to come into being at some point as your organisation grows, and they fulfil a few really key functions at this stage. As the business grows these responsibilities will get shared across all kinds of teams and functions, but in Phase 3 you've only got one person - so it's worth being clear on what they're for:
- Maintaining a knowledge of the data which exists in the organisation, where it lives and how it's collected. This is so they can guide users on when it's appropriate to use the data and when it's not. As an example, the Analyst needs to know that some data is loaded asynchronously and so has a time lag involved, but all they need to communicate onward is "I've only given you data up until midnight last night because that's where I could be confident it's reliable".
- Being able to access this data and filter, aggregate or transform it into a form where business users can confidently use it - probably in the business users' tool of choice, Microsoft Excel, via the medium of the CSV file. Remember that the reason you have an Analyst is that many of your business users can't write performant and reliable SQL (whether through lack of skill, lack of practice or lack of time, which is how you got into this situation), so they can't traverse complex relational structures: data needs to be flattened so that they can work with it.
- In the course of doing the two tasks above, they will also be best placed to make sure that as the business grows you will still be able to do the above in a reasonable timescale. To start with, this means being the voice of analytics in the relationship with your development team. Later the line between development, data and analytics will become more blurry...
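The flattening task above can be sketched in a few lines. This is a hypothetical mini-schema (customers and orders tables) using only the Python standard library: the Analyst joins and aggregates the relational data, then writes a flat CSV that Excel users can work with directly.

```python
import csv
import io
import sqlite3

# Hypothetical mini-schema standing in for the production database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT);
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Alice'), (2, 'Bob');
    INSERT INTO orders VALUES (1, 1, 20.0), (2, 1, 35.0), (3, 2, 15.0);
""")

# Flatten the relational structure into one row per customer.
flat = conn.execute("""
    SELECT c.name, COUNT(o.id) AS n_orders, SUM(o.total) AS revenue
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()

# Write a CSV that opens cleanly in Excel (here to a string for brevity;
# in practice this would be a file handed to the business user).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["name", "n_orders", "revenue"])
writer.writerows(flat)
print(buf.getvalue())
```

The important design choice is that the output has no foreign keys and no nesting - one row per entity, ready for pivot tables.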
Phase 4: Things got blurry and SLOW
Up until now we've focused on the tools and skills you'll need to extract data from your production tech systems. This is great in the beginning because it minimises the number of moving parts. But the very benefit of this in the beginning is what becomes its downfall. Your business will get to a point where the analytical workload becomes as significant as (if not more than) the transactional workload (the workload coming from external users of your product). What makes things even worse is that these two workloads have fundamentally different needs. Your infrastructure up to this point has likely grown using OLTP databases (OnLine Transaction Processing), which are designed to access and modify single entities or rows at very high speed. In contrast, analytical workloads are much better suited to an OLAP-style database (OnLine Analytical Processing), which is optimised to process very large numbers of rows quickly while trading off some of the speed possible when accessing single rows.
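The difference between the two workloads shows up in the shape of the queries themselves. A sketch, using a hypothetical subscriptions table: the transactional query touches one row by its key, while the analytical query scans every row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE subscriptions (id INTEGER PRIMARY KEY, plan TEXT, monthly_fee REAL)"
)
conn.executemany(
    "INSERT INTO subscriptions VALUES (?, ?, ?)",
    [(1, "small", 10.0), (2, "large", 30.0), (3, "small", 10.0), (4, "medium", 22.0)],
)

# OLTP-shaped: the application fetches or updates one row by primary key.
plan = conn.execute("SELECT plan FROM subscriptions WHERE id = 2").fetchone()[0]

# OLAP-shaped: the analyst scans every row to answer an aggregate question.
avg_fee = conn.execute("SELECT AVG(monthly_fee) FROM subscriptions").fetchone()[0]

print(plan, avg_fee)
```

An OLTP database indexes for the first shape; an OLAP database organises its storage for the second, which is why one system struggles to serve both well at scale.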
Both OLTP and OLAP are now somewhat old-school terms, but they remain useful concepts to keep in mind when considering the kind of tradeoffs which exist. More likely you are going to hear about:
- Clustering: By having several copies of the same data which are all kept in sync with each other, you can achieve much faster read performance by spreading the work across many copies of the database. In contrast, writing becomes slightly more complex, and in many cases write loads are restricted to one master node, so write performance can be similar to or even worse than a non-clustered solution. This is because all nodes have to know about all writes so that the data remains consistent, but it is not necessary for them to know about all reads. The way that nodes in a cluster keep in sync is a technology known as replication, which can be quite useful on its own.
- Columnar Storage: Typical OLTP storage technologies store all the attributes of a single row near each other. This makes updating several properties of one thing very fast and efficient. In analytical workloads we're more likely to be interested in one property of several things: what is the average weight of all the puppies? rather than what is Rover's weight and height? This difference makes it more efficient to store all the values of a single attribute for all rows together - in other words, to store the columns of the table together rather than the rows. This makes reads of whole rows slower, and additions of extra rows much slower, but speeds up reads of individual columns in isolation quite significantly.
- Data Warehousing: When talking about OLTP and OLAP we're not suggesting you replace your current production database with a different type (which would mean your application would be using a sub-optimal technology for its needs). What you will need to consider is having separate databases for separate needs, each optimised for the kind of workload it's going to expect. Having a database which contains data solely for analytical purposes is called a data warehouse. They come in many different flavours, which we'll cover in part two of this article, but the key point for us now is that you're going to need one, in one form or another - which leads to the key challenge of Phase 4: how? Or perhaps more pointedly: who?
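The row-versus-column trade-off can be sketched in plain Python with the puppy example (hypothetical data): the same records stored row-oriented as a list of dicts, and column-oriented as a dict of lists.

```python
# The same three records in the two layouts.
# Row-oriented: all attributes of one puppy stored together (OLTP-style).
rows = [
    {"name": "Rover", "weight": 4.0, "height": 30.0},
    {"name": "Bella", "weight": 6.0, "height": 35.0},
    {"name": "Max",   "weight": 5.0, "height": 32.0},
]

# Column-oriented: all values of one attribute stored together (OLAP-style).
columns = {
    "name":   ["Rover", "Bella", "Max"],
    "weight": [4.0, 6.0, 5.0],
    "height": [30.0, 35.0, 32.0],
}

# "What is Rover's weight and height?" - natural in the row layout.
rover = next(r for r in rows if r["name"] == "Rover")
print(rover["weight"], rover["height"])

# "What is the average weight of all the puppies?" - in the column layout
# this touches one contiguous list and never reads names or heights at all.
avg_weight = sum(columns["weight"]) / len(columns["weight"])
print(avg_weight)
```

Real columnar databases add compression and vectorised execution on top, but the underlying layout decision is exactly this one.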
There are some great articles on database technologies out there. A particular favourite of mine is this one from Metabase. We'll go into which choices Tails has made next time, but the key question remains: who is going to set this up? Your analyst can write great, performant SQL queries but may not have done any database administration before. Your development team have, but perhaps never from the perspective of an analytical workload rather than a transactional one - they may also not be familiar with the kind of questions that an analyst might ask of a database. The bottom line is that this expertise probably doesn't exist in your organisation (yet), and given you need it, you've got three options:
- Buy it. As we mentioned at the start, there are several businesses out there who will handle all of this for you. This makes things easy right now, but there are a few key tradeoffs with this option: price (none are cheap), flexibility (although you might not care right now, it could be very difficult to tweak things down the line) and, importantly, lock-in (if data is at the centre of your business, you will be dependent on it, and if your data lives on a third party platform then you will become dependent on them, which is a meaningful business risk to consider).
- Hire it. Given buying it looks risky and expensive, you could just hire someone with the right skills to set this up. The field of expertise we're talking about used to be very much database administration (DBA), and has more recently grown to include technologies beyond traditional databases, becoming known as data engineering. Great data engineers are hard to find and also not a cheap option. Finding one that fits with your team may take a significant amount of time, and while the time will come, there are other reasons that you might not go down this road yet.
- Contract it. You could contract someone into your business rather than hiring them full time, to avoid the time and expense of getting someone in when you might not have a full person's workload yet. Contractors and freelancers in this area can charge some pretty eye-watering day rates, but that might be the most efficient short term solution for you.
Beyond Phase 4...
It might seem that none of the options we looked at at the end of Phase 4 look particularly easy, but it's worth taking stock of what getting this far means in your journey. In Phase 1 there was no data, no structure and initially no numbers. In Phase 4 we're looking at how to graduate from numbers and data as a sideline into numbers and data as something worth seriously investing in as a support mechanism for the rest of the business. It's also worth keeping in mind that graduating from Phase 3 to Phase 4 has only become accessible to the small or medium sized business in living memory; before that, serious databases and data technologies were only available to the big budgets of large commercial enterprises.
On the people side, there will be all kinds of pressures to have product people who sideline in data rather than data people who partner with people around the business. If you feel this pressure there's a great article written by Brian Balfour at coelevate on The Data Wheel of Death which lays out some of the pitfalls of this.
Next time we'll look at how to make this shift into the big leagues without breaking the bank and take a tour through some of the technologies which can enable that...