Multiple pieces of information I came across this morning pointed to the importance of silicon, i.e., semiconductor chips, for EdgeAI devices. A couple of them came from my early-morning ritual of reading the WSJ. Two news articles stood out:
- Apple aims for its own AI chip
- Qualcomm’s Smartphone Future Looks Brighter With AI
And behold: a few hours later, we find that Apple launched its latest iPad. It did make changes this time, but mostly to the hardware. Among those changes was the M4 chip. All this focus on silicon in edge devices might make you wonder: are chips really that important for edge devices? The answer is: extremely important. The other question is: should companies make them their focus area right now? The answer to that question is: Ça dépend (it depends).
Hardware is currently the major constraint on the path to accelerating AI advances on edge devices.
AI has been on edge devices for some time now, but almost none of those capabilities are local; the models run in the cloud. Why these models, language models in particular, are not local is not difficult to decipher: edge devices simply do not have the hardware to accommodate them.
However, the imperative to build local, on-the-edge AI has been recognized by the major tech players. One obvious focus area is designing models specifically for edge devices, and small language models (SLMs) have become a key one. Microsoft currently looks to be in the lead in this arena. And if the research papers published by its scientists are any indicator, Apple is working on its own SLMs designed for edge devices as well.
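To make the hardware constraint concrete, here is a rough back-of-envelope sketch in Python. The parameter counts, precisions, and device RAM figures are illustrative assumptions, not measurements: a model's weights alone occupy roughly parameters × bytes-per-parameter of memory, before you even account for activations and the KV cache.

```python
# Back-of-envelope estimate of model weight memory vs. device RAM.
# All numbers below are illustrative assumptions, not measurements.

def weight_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A mid-size cloud LLM at 16-bit precision:
print(f"70B model @ fp16: ~{weight_memory_gb(70, 16):.0f} GB")  # ~140 GB

# A small language model quantized to 4 bits:
print(f"4B SLM @ 4-bit:   ~{weight_memory_gb(4, 4):.0f} GB")   # ~2 GB

# A flagship smartphone has (assumed) roughly 8-12 GB of RAM, shared with
# the OS and every other app. That is why the 70B model is a non-starter
# on the edge, while a quantized SLM is at least plausible.
```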
But what should be the priority focus area in the current state (the operative phrase being “current state”)? Software or hardware? That will depend on your capabilities and your roadmap, or vision, for EdgeAI. So let us explore whether custom silicon should be a priority when it comes to EdgeAI.
Look at the instances where innovative product companies have ventured into making their own silicon, or are planning to. You will find that they first achieved great success on the software or product side. Consider OpenAI’s interest in building its own custom chips. By now, OpenAI has a robust understanding of which hardware constraints impact its LLMs. That knowledge, coupled with a clear view of the capabilities it plans to attain next, gives it a blueprint for what it wants from its custom chips.
The same applies to Apple. Over a decade of living with the performance constraints of Intel’s chips in its products, Apple learned what kind of customization it wanted. Once it entered the custom-chip arena, that learning only accelerated development, as we have seen with the M-series chips, which entered their fourth generation today.
That is why, if you are not already a chipmaker like Qualcomm, the right strategy at this point would be to focus on the software (unless you have been working on edge-specific AI for a long time and have a decent understanding of real-world performance constraints). There is another aspect to this, one that has always mattered more in my world: customers interact primarily with the software. They do not give a sh*t about the hardware that powers it. If the M-series chips had not been an improvement over the Intel chips, consumers would have forced Apple to drop them. What allowed Apple to build the chips that unleashed the other capabilities in its devices was the learning it had amassed.
Large focus groups are the answer if companies do not want to wait and intend to develop hardware in tandem with software (assuming their AI models are close to maturity in the lab). The key is to get feedback from actual users rather than relying on lab studies of constraints. The gist: while custom silicon for EdgeAI may be imperative, if you are not in the chipmaking business, collecting learnings before embarking may be a good move.

