So you crafted a website that would make the Father of Squarespace green with envy. The bad news? You’re not done. The good news? You’re not done. Even if your new site is a technological marvel that appears to solve all your problems, there’s still opportunity to monitor, improve and, ultimately, grow your brand.
Never Stop Testing
David Ogilvy said, “Never stop testing, and your advertising will never stop improving.” The same is true for your digital experience.
Rome wasn’t built in a day. Or 10 days. Or a month – or however long it took you to build your website. So focus less on being “done” and more on adopting a mindset of constant evolution. Set up continuous improvement processes to optimize the experience, from initial concepts through implementation. Once you launch a site, draw on a variety of sources for quantitative and qualitative feedback, measured against established goals. Data and feedback on the site experience will allow you to formulate hypotheses on how to improve it.
The cycle of formulating and testing hypotheses informs continuous improvement of the experience and functionality. Objective findings from user testing have a unique way of aligning stakeholders and priorities and streamlining some tough conversations.
Alignment Through Insights
The first task in a website revamp? Figure out what the consumer wants. Use lo-fi and hi-fi user testing to inform your approach, along with stakeholder collaboration and formative research. Once the site launches, monitor Google Analytics and use data visualization tools like heat maps. Then use direct user feedback to inform ongoing A/B testing and remote user testing, improving specific features with an informed approach.
Recently, LC leveraged in-person usability testing to stitch together feedback on an assortment of features and gain a holistic view of the site experience. During a test, on-site participants were asked to perform "real-life" tasks. The goal of any usability test is to identify usability problems, collect data on participant performance and gauge participant satisfaction with the product. It’s also an unparalleled opportunity to bring key stakeholders along in the discovery.
If a picture is worth a thousand words, watching someone succeed or struggle with an experience is worth ten thousand. Prior to testing, stakeholders held a few competing opinions about how users filter and sort when searching for a provider. After everyone observed firsthand how real users interacted with the site, it was clear which concerns actually mattered.
It’s imperative to combine feedback from in-person user testing with your other input sources to see what works, what doesn’t, and how you can improve.
One Test Doesn’t Fit All
Continuous improvement isn’t just for the big fish. Problems and questions of all sizes can be addressed with a wide variety of testing methods. Every approach has pros and cons, but matching the right tool to the question at hand lets you gain solid insights efficiently.
Testing can be used for a wide variety of needs, speeds and budgets to inform design decisions during any phase of a project. In addition to in-person testing, remote methods can reduce the time to recruit participants and run tests. You can observe users interacting with a live site or gather feedback on a prototype before the site is in development. A test can be structured to inform enhancements to anything from layout and design to navigation and information architecture.
Another way to discern the best approach is A/B Testing or Multivariate Testing of different solutions. These methods compare success rates to determine which solution performs best. Testing can be set up on a live website or conducted as a preference test with offline samples.
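To make the comparison of success rates concrete, here is a minimal sketch of how A/B results are often evaluated, using a standard two-proportion z-test. The conversion counts and the scenario (a new filter layout) are hypothetical, and the z-test is one common statistical choice rather than a specific tool the article names:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion rates of variants A and B with a two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B perform the same
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical example: variant B (new filter layout) vs. variant A (current)
p_a, p_b, z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p:.3f}")
```

A p-value below your chosen threshold (0.05 is typical) suggests the difference in success rates is unlikely to be noise, which is what lets a test settle a debate rather than restart it.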
Know Your Audience
Targeting the right audience is also key to effectively validating an experience. After all, the point of testing is to gauge user satisfaction and performance on the site. Take care to understand who the target audience is for a particular question, then screen participants so that qualified feedback comes from true-to-life testers.
Proper recruitment is critical to the success of a testing initiative, and it’s not just about demographics. A participant’s ability to articulately and thoughtfully convey their expectations versus their experience can make or break a test.
The Bottom Line
Research has shown every dollar invested in user-centered design pays dividends in cost savings later. This is known as the 1:10:100 rule. The rule states: for every $1 spent on user research up front, $10 is saved on changes during development and $100 on changes in production. The reason is simple: the freedom to change a design is greatest up front. Allotting time and resources before investing in development helps you understand user goals and avoid costly rework later.
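As a quick back-of-the-envelope illustration of the 1:10:100 rule above (the $500 base cost is an arbitrary example figure, not from the article):

```python
# The 1:10:100 rule: the relative cost of addressing an issue grows
# roughly tenfold at each later phase of a project.
COST_MULTIPLIER = {"research": 1, "development": 10, "production": 100}

def fix_cost(base_cost, phase):
    """Estimated cost of fixing an issue caught in the given phase."""
    return base_cost * COST_MULTIPLIER[phase]

for phase in COST_MULTIPLIER:
    print(f"Issue caught in {phase}: ${fix_cost(500, phase):,}")
```

The same issue that costs $500 to address during research balloons to $50,000 once it reaches production, which is the whole argument for testing early.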
Remember, while testing methods may vary, the outcome doesn’t: whatever method you choose, never stop improving, and your followers will thank you.
For more tips and insights on how to take your marketing from now to next, subscribe to our newsletter or contact Nicole Stone – Senior Vice President, Business Development at email@example.com or 414.270.7235.