Sexy technology gets you through the door; the business case puts you in every store.
Anyone who has looked at deploying AI at scale within their business knows that it can get very expensive very quickly. Whether you are building your own infrastructure or leasing time, space and processors from Amazon or Alphabet, costs can escalate rapidly and risk killing a project before it has begun to return any benefit.
When we began our Computer Vision journey four years ago, we knew that controlling costs was the key to achieving scalability. Reliability and accuracy matter, of course, but achieving perfection in a single-store trial won't get your system widely deployed if the costs don't add up.
Efficiency as philosophy
Our Chief Technology Officer, Abhijit Sanyal, is a fascinating person. Every year he takes time off and goes completely off-grid, hiking in the mountains or trekking through the jungles of northern India. His favourite journeys take him to Arunachal Pradesh, a state known for its greenness but also for its lack of trekking infrastructure. The rules there are simple: leave nothing behind but footprints. You must meticulously plan everything you take with you, bring back as little as possible, and carefully consume your precious resources along the way. There is a discipline to this that cannot easily be cast aside. Planning a ten-day trek can take many months, alongside the physical and mental training needed to cope with the rigours of what is to come. Once that discipline becomes a state of mind, it turns into a philosophy that can be applied to other areas of work and life.
Abhijit has instilled this philosophy in our engineering teams. Once a system is working properly, it is continuously examined to identify processes that can be pared away, algorithms that can be rewritten or enhanced, and underlying model architectures that can be improved, all to reduce the processing power our deployed systems require. It sounds like a minor technical detail, but this work means that many of the applications running on the SAI platform can operate in-store without a Graphics Processing Unit, or GPU. And that can make or break the business case.
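One common way that models are slimmed down for CPU-only deployment is post-training weight quantization. To be clear, SAI's actual optimisation techniques are not described here; this is a generic, illustrative sketch of the idea of trading a little precision for much cheaper storage and arithmetic:

```python
# Illustrative sketch only: symmetric int8 post-training quantization,
# one common technique for running models on CPUs instead of GPUs.
# The figures and function names are for demonstration, not SAI's code.

def quantize_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard zero weights
    q = [round(w / scale) for w in weights]            # each value fits in int8
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.82, -0.34, 0.05, -1.27, 0.61]   # toy weights; float32 in practice
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and integer arithmetic is far
# cheaper on commodity CPUs; the cost is a small, bounded rounding error.
max_error = max(abs(w - a) for w, a in zip(weights, approx))
```

The rounding error is bounded by half the scale factor, which is why quantized vision models usually lose only a fraction of a percent of accuracy while running comfortably on CPU-class hardware.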
The threat of technological laziness
There is a lazy assumption in the tech industry that artificial intelligence systems can only run efficiently if they use GPUs. Mandating this kind of specialised device has a few important effects:
- Firstly, it puts a premium price on the hardware. GPUs are expensive. There is also an argument that, because they are heavily used in cryptocurrency mining, their prices tend to follow the volatility of those markets, peaking when coins are at their most expensive and crashing when bubbles burst. This is no way to build business-critical infrastructure, though a strong, long-term relationship with a vendor helps.
- Then there are the extra running costs. GPUs draw a lot of power and generate a lot of heat: 60% of a GPU chassis is taken up by cooling fans. Depending on where the servers are kept, extra cooling systems may be needed to dissipate the hot air those fans move.
- There is much discussion in the AI development community that reliance on high-end processors allows code to become bloated and disincentivises good software development lifecycle practices. In production, this can mean that systems actually degrade in performance over time, although this remains an area of research and conjecture.
- Finally, AI development is at a relatively early stage, akin to early desktop software in the office, when software was closely coupled to the hardware it ran on. Early versions of Microsoft Windows, for instance, were designed to run exclusively on Intel processors. In the same way, much AI software today is developed to exploit the capabilities of particular GPUs. Decoupling AI software from processing hardware is ongoing work that is still in its infancy and could take years to reach mainstream applications.
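The running-cost point above can be made concrete with some back-of-envelope arithmetic. Every wattage and tariff figure below is an illustrative assumption, not a measurement from any real deployment:

```python
# Back-of-envelope annual electricity cost for an always-on in-store server.
# All figures are illustrative assumptions, not measured values.

HOURS_PER_YEAR = 24 * 365          # continuous in-store operation
PRICE_PER_KWH = 0.15               # assumed electricity tariff, in USD

def annual_energy_cost(watts, price_per_kwh=PRICE_PER_KWH):
    """Cost of running a device for a year at a steady power draw."""
    return watts / 1000 * HOURS_PER_YEAR * price_per_kwh

gpu_server = annual_energy_cost(450)   # assumed draw with a discrete GPU
cpu_server = annual_energy_cost(120)   # assumed CPU-only box

# The gap compounds across every store in an estate, before counting any
# extra air-conditioning needed to remove the heat the GPU fans push out.
saving_per_store = gpu_server - cpu_server
```

Multiplied across hundreds of stores, even a modest per-site difference in power draw becomes a line item that a procurement manager will notice.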
There is definitely a place for GPUs within the SAI Platform's portfolio. Applications that stitch multiple camera feeds together to analyse a contiguous view of a store would be impossible without incredibly fast processing. These systems protect a shop from shelf-sweeping theft, which can only be tackled through early identification. The ROI for these systems is compelling, even with the elevated hardware and running costs.
Our common-sense approach to matching system requirements to the value of the problem the AI is trying to solve means that we can meet our customers’ needs without running afoul of procurement managers. We have done all the hard trekking so that you can just admire the views.