Maturation and innovation of technology
Watches were invented around the 16th century. Initially, the concept underwent a great deal of experimentation with design, mechanics and aesthetics. Over time, after sufficient iteration and variation, certain designs were converged upon and became the mainstream notion of the watch. After all, a watch that runs out of battery quickly is not very convenient, nor is a sundial watch useful on cloudy days or at night, nor is a pocket watch as convenient as one worn on the wrist.
Despite the design becoming roughly standard, we still see variation among watches – different strap styles, colours, metals, degrees of smartness and feature complexity, and so on. Of course, major shifts in technology, such as the advent of silicon chips and wireless internet, allow for major changes to watches. But the real variations in design are relegated to niche markets, where hobbyists collect and design extravagant and unique watches.
This experimentation is important, because we may find a major improvement that we hadn’t yet considered. It seems increasingly likely that this revolutionary change will come from an industry different from the watch industry – perhaps in the form of better smartwatches, or augmented-reality glasses that also display the time, or even technological implants in our bodies. The probability of finding a radical improvement in the physical, mechanical, analogue watch, however, seems to be decreasing over time because, despite continuous attempts, no one has found one so far.
Consider another “technology” – ice cream. The traditional kind is still with us and, although it has evolved in how it’s manufactured, the flavours on offer and the fat contents available, it is essentially still ice cream. Then there are other forms of frozen dessert that have come from innovation in slightly different areas – sorbet, gelato, frozen yogurt. Still, it seems unlikely that anyone will radically improve traditional ice cream.
I suggest that innovation in many technologies works like this. Let’s look at personal computers. They were derived from massive punch-card-processing computers, and first came to us in recognizable form as the IBM PC in 1981 and the Macintosh in 1984. Then these desktop PCs got sleeker and faster, and their components were modularized into monitors, peripherals, and cases containing CPUs and so on. Then the laptop was introduced and, while early models were more like suitcases than notebooks, today they are extremely thin, light and fast. At the same time, software was improving too – graphical user interfaces were introduced and then refined radically. Video games appeared and were continually improved to the point of being fairly realistic. Over time, the core tasks people used a computer for – email, office work, music and news – could easily be accomplished by most computers. While improvements for gamers and techies continued to be made, people began keeping their computers longer and longer before upgrading. Sure, we got touchscreens and ever thinner, more efficient, faster machines, but the computer seems to have reached a mature point in its design. There will surely be some radical alterations and improvements, with 3D, haptic feedback and so on, but those innovations will have to come from the development of these other technologies, not from iterations of the computer itself.
Other technologies, such as tablets and smartphones, seem to be approaching this maturity. They might get a bit faster, with better screens, longer battery life or better cameras. But major innovations will come from other technologies, such as the aforementioned 3D. In the coming weeks, I will look at some of these budding areas and see where they might be headed.