In 1870, Jules Verne foretold the future in his famous novel, 20,000 Leagues Under the Sea. At a time when only crudely constructed vessels were being built, he spoke of a future world with submarines. Fast forward some 140 years, and IBM has acted on its own visionary dream with the introduction of Watson. Named after the company’s first CEO, industrialist Thomas J. Watson, Watson takes customer questions, quickly extracts key information and then reveals insights, patterns and relationships from all the data it’s retrieved.
And that’s just the tip of the iceberg.
In recent history, what used to be the playground of a few hard-boiled scientists now has numerous players: Google, Microsoft and a host of others have opened up their artificial intelligence (AI)/machine learning platforms, allowing the rest of us to happily swim in and muddy the waters.
With the advent of these platforms, and the products built around them, certain complications have arisen, in particular with quality assurance (QA). Traditionally, QA follows a script based on a set of inputs with expected outputs. These tests are deterministic in nature and, for the most part, rigorous but fairly recipe-bound. Now, however, we’re dealing with systems that have very little respect for our scripts and can be almost as bothersome as some humans.
A + B Sometimes Equals D
Quality assurance (QA) is a key component of product development. Incredibly detail- and process-oriented, QA involves testing, testing and more testing to ensure accuracy of data and systems. In the not too distant future, however, there may be a lapse in the QA continuum.
Let’s consider image recognition and classification. At the moment, most systems (IBM, Google, Facebook and others) focus on nouns. They recognize objects – and some are unnervingly good, stringing together sequences of images over time to track an object or subject. Verbs are still a bit of a problem. An image with a tiger and a deer can be classified as containing a tiger and a deer, but most systems will not return the verb “eat,” even though the tiger is about to eat the deer. Let’s imagine that this problem is solved in the near future, and that a business is built around classifying images. Now QA becomes a problem: the AI/machine learning system in question may classify an image differently from our test script and still be reasonably correct. Does the system fail the test in this case? Furthermore, how do we compare the results between multiple platform providers who may produce different but equally valid outputs given the same inputs?
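One pragmatic way to script around this is to stop asserting a single expected answer and instead accept any output a human reviewer has approved as valid. The sketch below (in Python, with entirely hypothetical labels and no real platform API) shows the idea for the tiger-and-deer image:

```python
# Sketch: a QA check that scores a classifier's output against a list of
# acceptable label sets, rather than one exact expected string.
# All labels and providers here are hypothetical illustrations.

def passes_qa(predicted_labels, acceptable_label_sets):
    """Return True if the prediction covers ANY approved label set.

    predicted_labels: set of labels a system returned for an image.
    acceptable_label_sets: label sets a human reviewer has approved
    as equally valid descriptions of the same image.
    """
    return any(predicted_labels >= required
               for required in acceptable_label_sets)

# Two equally valid answers for the same image:
acceptable = [
    {"tiger", "deer"},         # nouns only -- minimally correct
    {"tiger", "deer", "eat"},  # nouns plus the verb -- also correct
]

provider_a = {"tiger", "deer", "grass"}         # extra labels are fine
provider_b = {"tiger", "deer", "eat", "field"}  # verb-aware system
provider_c = {"cat", "dog"}                     # clearly wrong

print(passes_qa(provider_a, acceptable))  # True
print(passes_qa(provider_b, acceptable))  # True
print(passes_qa(provider_c, acceptable))  # False
```

This also gives a common yardstick for comparing providers: each one passes or fails against the same human-approved answer sets, even when their raw outputs differ.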
How do we QA this mess?
The above example is trivial, and the problems are only going to get more complicated in the future.
Life today is diverse, with lots of shades of gray, grey or gris. It’s realistic to imagine that, as smart as AI systems may be, they’ll encounter data they don’t know how to respond to. Systems will churn and recalibrate, trying to make sense of this gray data; some may grind to a halt or produce abnormal results.
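One defensive pattern for this gray data is a confidence-threshold guard: predictions the system is unsure about get routed to a human review queue instead of being returned as fact. The sketch below assumes a hypothetical threshold and prediction values, not any real platform’s API:

```python
# Sketch: escalate low-confidence ("gray") predictions to human review
# instead of emitting a possibly abnormal answer. The threshold value
# is an assumed placeholder, not a vendor recommendation.

REVIEW_THRESHOLD = 0.80  # assumption: below this, a human must look

def triage(prediction, confidence, review_queue):
    """Accept confident predictions; escalate gray-area ones."""
    if confidence >= REVIEW_THRESHOLD:
        return prediction
    review_queue.append((prediction, confidence))
    return None  # no answer is better than an abnormal one

queue = []
print(triage("tiger", 0.97, queue))   # "tiger" -- confident, passed through
print(triage("tiger?", 0.41, queue))  # None -- escalated to a human
print(len(queue))                     # 1 item waiting for review
```

The point is not the threshold itself but the shape of the loop: humans stay in the pipeline exactly where the system admits it doesn’t know.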
Complex artificial intelligence will require greater testing and human control. The systems are smart, but people are still smarter…for now.
QA analysts will need to become greater experts at data interpretation, taking atypical information and producing quantifiable results. Only humans (at this point) understand and can extrapolate the nuances that make people individuals, with sometimes unique answers.
Will this be the downfall for some artificial intelligence systems? Will this just be a brief interruption in the QA process or will it have a greater impact?
Ready to Join Us?
At Laughlin Constable, we’ve spoken before about creating brand experiences that are more human and personal. Our LC product development team uses artificial intelligence (AI) systems, such as Watson, to provide enhanced customer and user experiences for our clients through smarter buying and decision systems. Knowing the customer better is imperative in evolving business models for the future.
Fully autonomous systems provide highly intelligent, educated “guesses,” backed by data. These systems – and their answers – are used in numerous industries, such as healthcare and finance, to help people get on with their lives and make better choices.
Systems and people must continue to evolve to complement each other. Lifestyles will only continue to grow in complexity – as will artificial intelligence. In 1863, Jules Verne predicted, with eerie accuracy, what the 20th century would look like. Today, we must channel our inner Jules Verne and stake our claim in the future, as uncertain as it may be. Are you ready to join us?
For more tips, tricks, or insights on how to take your marketing from now to next, subscribe to our newsletter or contact us at firstname.lastname@example.org or 844.LC.IDEAS.