The Thrive Platform integrates a number of features for collecting honest, valid data; storing and analyzing that data; and visually representing it so it can be turned into actionable intelligence. We’ve worked hard to offer flexibility in scheduling and design that isn’t readily available in single-purpose solutions. We’re proud of this, and we will continue to push for ease and usability.
However, for every kind of input we wish to capture (e.g., multiple-choice, scale, and essay responses from humans; input from APIs), and for every kind of representation or output of that input (graphs and charts, word clouds, summaries of simple responses), the difficulty of designing experiences and interpreting results can increase. All is not lost, however: with a few relatively simple guidelines, and an approach grounded in curiosity, you can get 90% of the value you’d receive from a million-dollar consultancy project.
Let’s take a look at one type of chart, the time series, and see what our eyes and that massive pattern-recognition engine in our skulls can do for us.
Assume that the graphs below represent responses that have been coded as belonging to Model X. Model X could be a single question, such as a query about an employee’s confidence at that moment, or it could be a composite built from several questions designed to tap into a broader, more robust individual or group measure. At five points in time we have assessed the individual’s/group’s model scores. For our purposes, let’s assume the time range is short to medium (weeks to a few months) rather than long-term (years to decades).
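To make the composite idea concrete, here is a minimal sketch of how several question responses might be rolled up into a single Model X score at each assessment point. The question names, weights, and values are purely illustrative assumptions, not Thrive’s actual schema.

```python
# Hypothetical sketch: rolling several question responses into one
# composite "Model X" score per assessment. Question names and the
# 1-7 scale are illustrative assumptions only.

def composite_score(responses, weights=None):
    """Return the (optionally weighted) mean of question -> score pairs."""
    if weights is None:
        weights = {q: 1.0 for q in responses}
    total_weight = sum(weights[q] for q in responses)
    return sum(responses[q] * weights[q] for q in responses) / total_weight

# Five assessment points, each a set of question responses (1-7 scale).
assessments = [
    {"confidence": 5, "clarity": 6, "support": 5},
    {"confidence": 4, "clarity": 5, "support": 5},
    {"confidence": 6, "clarity": 6, "support": 6},
    {"confidence": 5, "clarity": 7, "support": 6},
    {"confidence": 6, "clarity": 6, "support": 7},
]
model_x_series = [composite_score(a) for a in assessments]
```

This series of five scores is the kind of data each chart below is plotting.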
For the chart above, the first thing we might notice is the noisiness of the data. There are certainly cyclical patterns of behavior that might fit this chart; however, for the majority of cognitive-behavioral measures of interest to business, this seems to contain unnecessary “jitter,” especially over a short-to-medium time frame. For example, are you giving and rescinding promotions on a weekly basis? I doubt it, so I would also doubt that an employee or group would respond in drastically different ways at each assessment.
Solution: Change your question(s) and see what happens. If, after some tweaks to your model, your employee(s) are still bouncing back and forth, it’s time to schedule a meeting and find out what’s up directly.
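One quick way to sanity-check for jitter before tweaking questions: compare the average point-to-point swing against the overall drift of the series. This is a minimal sketch; the threshold (jitter more than twice the drift) is an illustrative assumption, not a Thrive default.

```python
# Sketch of a simple jitter check: is the series bouncing around far
# more than it is actually moving anywhere? Threshold is illustrative.

def jitter(scores):
    """Mean absolute change between consecutive assessments."""
    return sum(abs(b - a) for a, b in zip(scores, scores[1:])) / (len(scores) - 1)

def net_drift(scores):
    """Average per-step movement from the first score to the last."""
    return abs(scores[-1] - scores[0]) / (len(scores) - 1)

noisy = [6, 2, 7, 1, 6]    # bounces back and forth, ends where it started
steady = [6, 5, 5, 4, 4]   # a gradual, believable decline

for label, s in [("noisy", noisy), ("steady", steady)]:
    revisit_questions = jitter(s) > 2 * max(net_drift(s), 0.5)
    print(f"{label}: jitter={jitter(s):.2f}, revisit questions? {revisit_questions}")
```

The noisy series trips the flag; the steady one does not, and would instead be read with the trend patterns discussed next.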
Next, the above chart is one that is, unfortunately, more realistic and more common. What we see is a steady decrease across measurement points, which will hopefully bottom out at some point. This is an extremely common pattern: short-sighted change initiatives, morale after leadership changes and mergers, and retention of learning after workshops all tend to produce it.
Solution: Provide boosters at different points in time to sustain the positives from change. Reinforce learning over time.
The above chart is the flip of the previous one, and shows a pattern that would make any McKinsey consultant proud: up and to the right. We can also see what may be an asymptote, a stabilization at an upper bound (here at “7”), which is common with biological systems like humans. We can get stronger, but we will probably never be able to lift a pickup truck.
Solution: None needed! This is a good thing. What you can do is look for other variables related to your model over the same time period. Because you have found a way to increase one variable, it may provide an actionable lever for those related variables (e.g., traditional KPIs).
Finally, here is an interesting pattern that may be less immediately recognizable. Many of our programs are developed to clarify and align concepts and processes that seem a little fuzzy or out of whack. Often, when you elicit responses from a group and then show them how they may be performing sub-optimally or out of sync, you see an immediate drop in confidence, belief, or understanding (as we see at the second time point).
This isn’t necessarily a bad thing, however, and the strong individuals and groups we have worked with demonstrate the full pattern above. That is, a momentary setback is accepted and overcome, with scores finding their way back to where they started. This is resilience, or grit, at work.
There are countless ways of measuring, analyzing, and visualizing data, to the point that it feels like we all have to be a little bit data scientist, a little bit experimentalist, and a little bit statistician. While that isn’t true, getting started can still be daunting. As we saw in the examples above, however, much of the ability to interpret patterns of responses is already in our heads. And we now have the tools on our desks and in our hands.
At Thrive we are continually working to make it easy for you to move quickly, with lightweight programming and tech, to understand yourself and your organization better. From there, we hope you have better days.
Thanks for your time — JZ