PRODUCT DESIGNER
2016
Outcome: PaaS (Launch 2017)
Resolver Inc. is a leading risk management software provider that is currently transitioning its line of products into a single, unified platform called Core.
Core was conceived as a platform that could host a range of applications spanning the risk spectrum, from planning to response. The platform needed to be a highly configurable base into which a range of disparate apps could feed large amounts of corporate data, as well as a robust reporting and visualization tool. I was the lone product designer on the project.
For this case study, I am going to focus on our first pass at creating a reporting MVP for the data that gets ingested by the Core platform. Reporting is a key tool that touches all our users and nearly all of our personas, so finding a solid first step was crucial for a feature that would keep growing alongside the platform itself.
the idea
One of the largest early initiatives revolved around reporting. Core has a tremendously versatile forms engine that allows copious amounts of data to be input into the system, but that process needed to be paired with an equally powerful reporting engine. The goal was to develop a quick MVP reporting arm for Core, and to quickly iterate and improve upon that system.
the challenge
One of the most powerful aspects of Core is that the data housed within it is not organized in a hierarchical structure, but rather as part of a relationship model. That means that, by design, Core has no sense of what data is a subset of what other data; it only knows which piece of data is directly related to which other piece (or pieces) of data. To make matters more complex, each piece of data is defined by the user, not Resolver, so all Core knows is that something is a piece of data, not that it's the name of an audit process or a suspect under investigation.
So, the first problem to solve, before we even broke out into user needs and features, was how to give order to this data so that it could be reported on in the first place.
data definitions
The Core project was centred around Objects (anything from a witness statement to a process that takes place at a building) and Relationships (any user-created link between two Objects, such as relating a witness to a witness statement or a building to a process at that building). That meant the platform already had a notion of which objects were related to each other, even if it didn't know which was the 'parent' and which the 'child' in the relationship. So, the first part of organizing the data into a reportable structure was to pull out the objects that needed to be reported on, based on their relationships to one another. We could figure out the hierarchy later.
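To make that model concrete, here is a minimal sketch of a flat, non-hierarchical structure like the one described above, written in TypeScript. Every name here is my own illustration, not Core's actual schema:

```typescript
// The platform only knows that an Object exists and what user-defined
// type it was given -- "Witness" means nothing special to the system.
interface CoreObject {
  id: string;
  typeName: string;                // user-defined, e.g. "Investigation", "Witness"
  fields: Record<string, unknown>; // user-defined form data
}

// A Relationship is an undirected link between two Objects. Note there is
// no parent/child direction: the platform knows *that* two Objects are
// related, not which one "contains" the other.
interface Relationship {
  objectA: string; // id of one Object
  objectB: string; // id of the other
}
```

The key point is that a Relationship carries no direction or meaning of its own; any hierarchy has to be imposed later, at reporting time.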
My first instinct was to create a wizard based around a series of dropdowns that would allow the user to multi-select Objects to include in the data set of a report. This UX, though, was cumbersome and hard to orient within the larger data model.
From there the design evolved into something more graphic, something that better represented a map of the data model being reduced into a sortable pool of data. So, your anchor Object might be an Investigation, and from that Investigation you might want to pull in related data like Witnesses (and their related Witness Statements), Suspects, Involved Vehicles, and Locations. However, you may be less interested in data like the Officers that first hit the scene, so they would not be added to the collection of Objects to be reported on.
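As a rough illustration of what a data definition resolves to at report time, the sketch below (using the hypothetical CoreObject and Relationship types from earlier) walks the relationship graph outward from an anchor Object, keeping only the Object types the report author opted into. This is my own approximation of the idea, not Core's implementation:

```typescript
// Given an anchor Object and the set of user-selected type names, walk the
// relationship graph breadth-first and collect the Objects to report on.
// Types excluded from the definition (e.g. "Officer") act as dead ends.
function buildReportSet(
  anchor: CoreObject,
  includedTypes: Set<string>,
  objectsById: Map<string, CoreObject>,
  relationships: Relationship[]
): CoreObject[] {
  const collected = new Map<string, CoreObject>([[anchor.id, anchor]]);
  const queue: string[] = [anchor.id];

  while (queue.length > 0) {
    const currentId = queue.shift()!;
    for (const rel of relationships) {
      // A Relationship has no direction, so check both ends.
      const neighborId =
        rel.objectA === currentId ? rel.objectB :
        rel.objectB === currentId ? rel.objectA :
        null;
      if (neighborId === null || collected.has(neighborId)) continue;

      const neighbor = objectsById.get(neighborId);
      // Only follow the graph into types the definition opted into.
      if (neighbor && includedTypes.has(neighbor.typeName)) {
        collected.set(neighbor.id, neighbor);
        queue.push(neighbor.id);
      }
    }
  }
  return [...collected.values()];
}
```

Anchored on an Investigation with Witnesses, Witness Statements, Suspects, Vehicles, and Locations included, a walk like this would reach each statement through its witness, while any relationship to an Officer would simply go unfollowed.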
That structure, which evolved many times during the creation of the reporting feature, gave us a starting point to begin building requirements around, because it was a testable organizing structure. Feeling confident as a group that we could make logical sense of our data allowed us to move forward in designing this feature.
early sketch showing an anchor object spider-ing off in the data model
early sketch for how to intelligently link data
user needs
We had an anchor company that was driving production of this feature. This customer, a Fortune 100 corporation, had very specific needs as it pertained to the output of Reporting. We were, in essence, tasked with recreating the kinds of reports they were currently using internally so that a transition to the Core software could happen smoothly.
As a product team, we worked with the customer to identify not only what data needed to be presented (and how), but also what the meaning of the data was to them. While they had a clear idea of what they believed they needed, it was important to us to make sure that was based on actual need, and not based on the limitations of the software and processes that had existed at the company and become calcified.
This process allowed us to learn what was truly a 'need', and what could be represented another way. From here I could do quick sketches that represented our ideas and have them validated by the customer before a single line of code was written.
teamwork
Once we knew how we were going to corral our data and what the customer needed from that data, we gathered together as a Product team to make a plan. That meant refining the data definition idea, determining what kinds of report visualizations were in scope for the MVP, and deciding how to genericize this feature so that it would fit the broad reporting needs of more than just our anchor customer.
This isn't the ideal way to create a feature. The ideal would have allowed us to gather a wider range of user needs at the outset, really figure out what problem we were solving for them, and design the feature around those problems. However, this feature was unique for several reasons. First, we had a customer waiting for this feature, and money tends to drive these decisions. Second, the idea of reporting on data was going to be table stakes for our software, so it wasn't like we were operating from a place of ignorance in that regard. Lastly, we had legacy software that was in wide use, all with fairly mature reporting engines, and so we had a fairly good amount of data about how and why reporting is used in software like ours.
Nonetheless, we now had to proceed carefully. We had to use what we knew from our old software and our customer, but we also had to be future-looking. We were building new software for a reason: to move the company ahead. That meant exploring new uses for reporting that could not only satisfy existing customers, but also grow to incorporate things like platform-generated reports, created when artificial intelligence determines that certain data requires reporting. It took time with Product Manager Brad Fillion to really balance our backward-looking and forward-looking approaches to our reporting feature.
wireframing and prototyping
After assessing what the feature would include and why, it came time to wireframe, prototype, and generate feedback.
I prepared all of the wireframes and InVision prototypes, and gathered all of the feedback that this process generated. We learned a lot during the first pass at presenting our idea outside of our little bubble. There was a tremendous amount of input from various stakeholders, mostly around the fact that we were asking for a lot of unnecessary configuration from the end user, including determining who could access the reports and how they could be configured by a viewer. We made early assumptions that thorough configurability would be wanted, but it turned out that so many configuration options confused the user, and so we eventually streamlined before handing the project over to development.
final
It was instructive to finish the developed code and be able to test our reporting solution with real, live data. The one thing our prototyping process cannot replicate is how our designs behave with the kind of data the system actually has to work with.
We put our final solution down in front of our App Managers (those responsible for building the applications that sit on top of the Core platform) and the critiques were fascinating. The power that reporting put at their fingertips was undeniable. It became immediately clear that this system could do things our legacy software could only dream of. However, it also became clear that we had work to do to further refine the intuitiveness of the feature. Once people were walked through the features, they were able to create fairly impressive reports, but it took more explaining than we wanted for our users to get there.
The Data Definition proved to be the most divisive feature. While there was a general understanding of why it needed to exist, it achieved only half of its intended purpose: refining the data set that would ultimately be reported on. The other half, representing a logical visualization of the data model, proved to be a work-in-progress. Finding a simple way to visualize a data model is a problem that we are going to focus on in version two, but it was helpful to know that the data definitions worked from a functional perspective; now we just needed to improve the UI to make them more usable.
what's next?
For version two, aside from the aforementioned data definition improvements, we will also focus on creating more visualizations for reports, as well as the ability to take visualizations from reports and add them to a user's dashboard, two of our most requested follow-up features. The MVP did what it was supposed to do, though, which was to start us down the road on a feature that will be in never-ending iteration as the use cases grow and the technology we leverage for Core evolves.