Lenovo Accidentally Reveals Nvidia N1 Chip!


At CES 2025, NVIDIA unveiled its groundbreaking Project DIGITS, setting a new standard for desktop AI supercomputing. This innovative project not only highlights the immense potential of the GB10 superchip, which combines a Blackwell GPU with a Grace CPU, but also signals a shift in how desktop mini-computers can evolve into high-performance mobile workstations. Following the successful model established by Intel's NUC, several brands have begun to re-imagine the capabilities of compact desktop systems. NVIDIA has also announced plans to collaborate with third-party partners on Project DIGITS, raising the exciting possibility that the GB10 superchip could soon be integrated into various other product forms.

What came as a surprise to many, however, was just how fast this "future" is arriving.

Recently, a job posting from Lenovo circulated online. While not particularly newsworthy in itself, its job description sparked significant interest: it specified that the position would be responsible for the hardware design and development of a new-generation SoC within the company, identified as the NV N1x.

Indeed, NVIDIA's new chip had been "leaked" in a rather unceremonious fashion.

But just how powerful is the NVIDIA N1 chip?

While neither NVIDIA nor Lenovo has provided extensive details about the N1 chip, enthusiastic netizens have succeeded in uncovering some basic information about the N1 series. To begin with, the N1 processor is based on the ARM architecture and is segmented into two main variants: the high-end N1x and the mid-range N1. It is also possible that the N1 series could offer sub-models catering to other performance tiers. In terms of manufacturing, NVIDIA's N1 series is produced on TSMC's 3nm process, designed in collaboration with MediaTek, and built on the Blackwell architecture.

Here, it’s important to note that the N1 series can be seen as a sibling to NVIDIA's GB10 superchip.


However, in terms of performance output, laptops tend to face tighter constraints than desktop miniature computers. This likely explains why the N1 chip isn’t a full 20-core version of the GB10 superchip, which is consistent with its theoretical performance rating of just 180-200 TOPS.

Yet, compared with the computational prowess of mainstream AI PCs on the market, the N1 chip holds its own as a formidable contender. Most AI PCs that combine a CPU, NPU, and integrated graphics typically do not exceed 50 TOPS of computational power. In contrast, the N1's performance closely rivals that of previously released gaming laptops with dedicated RTX graphics.
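That gap is easy to quantify. A minimal back-of-envelope sketch using the TOPS figures cited above (raw TOPS ratings are a rough proxy and do not translate one-to-one into real-world AI performance):

```python
# Rough comparison of AI PC compute budgets, using the TOPS
# figures cited in the article (not official benchmarks).
TYPICAL_AI_PC_TOPS = 50               # CPU + NPU + iGPU combined, mainstream AI PCs
N1_TOPS_LOW, N1_TOPS_HIGH = 180, 200  # reported theoretical rating for the N1

speedup_low = N1_TOPS_LOW / TYPICAL_AI_PC_TOPS
speedup_high = N1_TOPS_HIGH / TYPICAL_AI_PC_TOPS
print(f"N1 vs. typical AI PC: {speedup_low:.1f}x to {speedup_high:.1f}x raw throughput")
```

On these numbers the N1 would offer roughly 3.6x to 4x the raw AI throughput of a typical CPU+NPU+iGPU configuration.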

It has been confirmed that the NVIDIA N1 chip will debut in Lenovo's new convertible laptop, slated to be showcased at Computex in Taipei mid-year, with an expected release in Q4 of 2025. Given the high price of NVIDIA’s Project DIGITS, estimated at a staggering $3,000, it is anticipated that laptops featuring the N1 chip will also carry a premium price, likely running to several thousand dollars.

Could this be the dawn of a transformative era for AI PCs?

The high price tag associated with the N1 chip will likely restrict its adoption to a niche market: industry data from Lottu Technology indicates that the average online price for laptops in mainland China is around 6,472 yuan, while the average price of the top ten best-selling laptops on Amazon stands at approximately $653. Very few consumers are willing to spend upwards of $1,000 on a high-performance laptop.

However, the introduction of the N1 chip is highly significant for the long-term development of the AI PC industry. Firstly, the emergence of such high-performance AI PC components can drive a transformation of the AI PC software ecosystem from the hardware side.


At CES 2025, numerous brands highlighted local AI capabilities as a primary focal point for the coming year. Unlike the hybrid AI solutions currently mainstream on the market, edge AI requires that AI PCs possess sufficient local computational power to handle extensive AI tasks, rather than relying on cloud servers.

The arrival of the N1 chip thus lays the groundwork for local, edge-based AI computations.

Unquestionably, high-performance ARM-based AI PCs, especially those that leverage the capabilities of the N1x, pose substantial competition to existing AI PC workstations, such as those built on Intel's Xeon W series. Features such as greater hardware integration, improved AI interface support, and the ARM architecture's hallmark energy efficiency present new opportunities for edge AI PC workstations.

For instance, in the realm of sports broadcasting, more powerful AI mobile workstations could drastically reduce the need for a multitude of high-performance servers on site. Events like Formula 1, which operate on a global scale, could benefit from a decrease in heavy equipment, leading to lower logistics costs and faster operational transitions.

Furthermore, the N1 chip supports Windows on ARM (WoA), which could stimulate the development of the ARM PC ecosystem and capitalize on the advantages of low-power local AI inference and real-time model processing. This can accelerate advancements across the entire WoA ecosystem, and hardware technologies developed for ARM-based SoCs can quickly be repurposed for smartphones, tablets, and other ARM mobile devices, igniting a race for low-power, high-performance solutions.

From the perspective of the AI industry, the introduction of local high-performance AI workstations also marks a pivotal turning point for AI PCs. Previously, mainstream laptops without dedicated graphics typically offered computational power below 50 TOPS, forcing their primary AI functions to rely on cloud services.


The advent of high-performance AI PCs suggests that the era of relying on cloud resources to compensate for limited local computational power is coming to an end.

For the average consumer, the experiential difference between cloud-based and local computation may not be significant. However, for enterprise users, particularly in sectors like healthcare and finance that prioritize information confidentiality, edge-based high-performance AI PCs are essential for running deep learning inference or data processing in secure local environments. This allows sensitive industries to leverage AI technology while maintaining control over their data.

When local computation capabilities fall short, these industries often resort to deploying private edge AI servers to build a controllable AI computing environment. However, in terms of setup barriers and maintenance costs, high-performance AI PC workstations present a far more cost-effective alternative.

The coexistence of local and cloud computing is likely to endure in the long run.

In conclusion, large-scale model training and extensive data processing will continue to require the support of supercomputing clusters. However, the emergence of high-performance AI PCs extends the boundaries of endpoint computing, reducing dependence on cloud resources while broadening the range of AI applications accessible to enterprise users.

As demand for mobile high-performance computing persists, and with the industry consensus on developing edge AI solutions in 2025, we can expect more high-performance commercial AI solutions like the N1 to emerge. Additionally, the trend toward localizing AI computing power is set to drive improvements in AI performance on consumer-grade devices from the software-ecosystem side.

It is also worth noting that as cloud AI computing became synonymous with the AI PC label, the PC industry inadvertently fell into a pattern of homogeneous AI use cases.
