Nvidia
from Wikipedia

Nvidia Corporation[a] (/ɛnˈvɪdiə/ en-VID-ee-ə) is an American technology company headquartered in Santa Clara, California. Founded in 1993 by Jensen Huang (president and CEO), Chris Malachowsky, and Curtis Priem, it develops graphics processing units (GPUs), systems on chips (SoCs), and application programming interfaces (APIs) for data science, high-performance computing, and mobile and automotive applications.[5][6] Nvidia is considered part of the Big Tech group, alongside Microsoft, Apple, Alphabet, Amazon, and Meta.


Originally focused on GPUs for video gaming, Nvidia broadened their use into other markets, including artificial intelligence (AI), professional visualization, and supercomputing. The company's product lines include GeForce GPUs for gaming and creative workloads, and professional GPUs for edge computing, scientific research, and industrial applications. As of the first quarter of 2025, Nvidia held a 92% share of the discrete desktop and laptop GPU market.[7][8]

In the early 2000s, the company invested over a billion dollars to develop CUDA, a software platform and API that enabled GPUs to run massively parallel programs for a broad range of compute-intensive applications.[9][10][11] As a result, as of 2025, Nvidia controlled more than 80% of the market for GPUs used in training and deploying AI models,[9] and provided chips for over 75% of the world's TOP500 supercomputers.[1] The company has also expanded into gaming hardware and services, with products such as the Shield Portable, Shield Tablet, and Shield TV, and operates the GeForce Now cloud gaming service.[12] Furthermore, it has developed the Tegra line of mobile processors for smartphones, tablets, and automotive infotainment systems.[13][14][15]

In 2023, Nvidia became the seventh U.S. company to reach a US$1 trillion valuation.[16] In 2025, it became the first company in the world to surpass US$4 trillion in market capitalization, driven by rising global demand for data center hardware amid the AI boom.[17][18] Owing to its size and market capitalization, Nvidia is counted among Bloomberg's "Magnificent Seven", the seven largest companies on the U.S. stock market.[19]

History


Founding

The Denny's roadside diner in San Jose, California, where Nvidia's three co-founders agreed to start the company in late 1992
Nvidia's former headquarters, home to the company through most of its pre-AI period (still in use)
Aerial view of Endeavor, the first of the two new Nvidia headquarters buildings, in Santa Clara, California, in 2017
Entrance of Endeavor headquarters building in 2018

Nvidia was founded on April 5, 1993,[20][21][22] by Jensen Huang, a Taiwanese-American electrical engineer who was previously the director of CoreWare at LSI Logic and a microprocessor designer at AMD; Chris Malachowsky, an engineer who worked at Sun Microsystems; and Curtis Priem, who was previously a senior staff engineer and graphics chip designer at IBM and Sun Microsystems.[23][24] In late 1992, the three men agreed to start the company in a meeting at a Denny's roadside diner on Berryessa Road in East San Jose.[25][26][27][28]

At the time, Malachowsky and Priem were frustrated with Sun's management and were looking to leave, but Huang was on "firmer ground",[29] in that he was already running his own division at LSI.[26] The three co-founders discussed a vision of the future which was so compelling that Huang decided to leave LSI[29] and become the chief executive officer of their new startup.[26]

The three co-founders envisioned graphics-based processing as the best trajectory for tackling challenges that had eluded general-purpose computing methods.[29] As Huang later explained: "We also observed that video games were simultaneously one of the most computationally challenging problems and would have incredibly high sales volume. Those two conditions don't happen very often. Video games was our killer app — a flywheel to reach large markets funding huge R&D to solve massive computational problems."[29]

The first problem was who would quit first. Huang's wife, Lori, did not want him to resign from LSI unless Malachowsky resigned from Sun at the same time, and Malachowsky's wife, Melody, felt the same way about Huang.[30] Priem broke that deadlock by resigning first from Sun, effective December 31, 1992.[30] According to Priem, this put pressure on Huang and Malachowsky to not leave him to "flail alone", so they gave notice too.[31] Huang left LSI and "officially joined Priem on February 17", which was also Huang's 30th birthday, while Malachowsky left Sun in early March.[31] In early 1993, the three founders began working together on their new startup in Priem's townhouse in Fremont, California.[32]

With $40,000 in the bank, the company was born.[29] The company subsequently received $20 million of venture capital funding from Sequoia Capital, Sutter Hill Ventures and others.[33]

During the late 1990s, Nvidia was one of 70 startup companies pursuing the idea that graphics acceleration for video games was the path to the future.[25] Only two survived: Nvidia and ATI Technologies, the latter of which merged into AMD.[25]

Nvidia initially had no name.[34] Priem's first idea was "Primal Graphics", a syllabic abbreviation of two of the founders' last names, but that left out Huang.[34] They soon discovered it was impossible to create a workable name with syllables from all three founders' names, after considering "Huaprimal", "Prihuamal", "Malluapri", etc.[34] The next idea grew out of Priem's proposed name for Nvidia's first product.[34] Priem originally wanted to call it the "GXNV", as in the "next version" of the GX graphics chips he had worked on at Sun.[32] Then Huang told Priem to "drop the GX", resulting in the name "NV".[32] Priem made a list of words containing the letters "NV".[34] At one point, Malachowsky and Priem wanted to call the company NVision, but that name was already taken by a manufacturer of toilet paper.[26] Both Priem[34] and Huang have taken credit for coming up with the name Nvidia,[26] from "invidia", the Latin word for "envy".[29]

After the company outgrew Priem's townhouse, its original headquarters office was in Sunnyvale, California.[29]

First graphics accelerator


Nvidia's first graphics accelerator, the NV1, was designed to process quadrilateral primitives (forward texture mapping), a feature that set it apart from competitors, who preferred triangle primitives.[26] However, when Microsoft introduced the DirectX platform, it chose not to support any other graphics software and announced that its Direct3D API would exclusively support triangles.[26][35] As a result, the NV1 failed to gain traction in the market.[36]

Nvidia had also entered into a partnership with Sega to supply the graphics chip for the Dreamcast console and worked on the project for about a year. However, Nvidia's technology was already lagging behind competitors. This placed the company in a difficult position: continue working on a chip that was likely doomed to fail or abandon the project, risking financial collapse.[37]

In a pivotal moment, Sega's president, Shoichiro Irimajiri, visited Huang in person to inform him that Sega had decided to choose another vendor for the Dreamcast. However, Irimajiri believed in Nvidia's potential and persuaded Sega's management to invest $5 million into the company. Huang later reflected that this funding was all that kept Nvidia afloat, and that Irimajiri's "understanding and generosity gave us six months to live".[37]

In 1996, Huang laid off more than half of Nvidia's employees—thereby reducing headcount from 100 to 40—and focused the company's remaining resources on developing a graphics accelerator product optimized for processing triangle primitives: the RIVA 128.[26][35] By the time the RIVA 128 was released in August 1997, Nvidia had only enough money left for one month's payroll.[26] The sense of impending failure became so pervasive that it gave rise to Nvidia's unofficial company motto: "Our company is thirty days from going out of business."[26] Huang began internal presentations to Nvidia staff with those words for many years.[26]

Nvidia sold about a million RIVA 128 units within four months,[26] and used the revenue to fund development of its next generation of products.[35] In 1998, the release of the RIVA TNT helped solidify Nvidia's reputation as a leader in graphics technology.[38]

Public company


Nvidia went public on January 22, 1999.[39][40][41] Investing in Nvidia after it had already failed to deliver on its contract turned out to be Irimajiri's best decision as Sega's president. After Irimajiri left Sega in 2000, Sega sold its Nvidia stock for $15 million.[37]

In late 1999, Nvidia released the GeForce 256 (NV10), its first product expressly marketed as a GPU, most notable for introducing onboard transformation and lighting (T&L) to consumer-level 3D hardware. Running at 120 MHz with four pixel pipelines, it implemented advanced video acceleration, motion compensation, and hardware sub-picture alpha blending. The GeForce outperformed existing products by a wide margin.

Due to the success of its products, Nvidia won the contract to develop the graphics hardware for Microsoft's Xbox game console, which earned Nvidia a $200 million advance. However, the project took many of its best engineers away from other projects. In the short term this did not matter, and the GeForce 2 GTS shipped in the summer of 2000. In December 2000, Nvidia reached an agreement to acquire the intellectual assets of its one-time rival 3dfx, a pioneer in consumer 3D graphics technology leading the field from the mid-1990s until 2000.[42][43] The acquisition process was finalized in April 2002.[44]

In 2001, Standard & Poor's selected Nvidia to replace the departing Enron in the S&P 500 stock index, meaning that index funds would need to hold Nvidia shares going forward.[45]

In July 2002, Nvidia acquired Exluna for an undisclosed sum. Exluna made software-rendering tools, and its personnel were merged into the Cg project.[46] In August 2003, Nvidia acquired MediaQ for approximately US$70 million,[47] and launched GoForce the following year. On April 22, 2004, Nvidia acquired iReady, a provider of high-performance TCP offload engines and iSCSI controllers.[48] In December 2004, it was announced that Nvidia would assist Sony with the design of the graphics processor (RSX) for the PlayStation 3 game console. On December 14, 2005, Nvidia acquired ULI Electronics, which at the time supplied third-party southbridge parts for chipsets to Nvidia's competitor ATI.[49] In March 2006, Nvidia acquired Hybrid Graphics.[50] In December 2006, Nvidia, along with its main rival in the graphics industry, AMD (which had acquired ATI), received subpoenas from the U.S. Department of Justice regarding possible antitrust violations in the graphics card industry.[51]

2007–2014


Forbes named Nvidia its Company of the Year for 2007, citing its accomplishments during that year as well as over the previous five years.[52] On January 5, 2007, Nvidia announced that it had completed the acquisition of PortalPlayer, Inc.[53] In February 2008, Nvidia acquired Ageia, developer of PhysX, a physics engine and physics processing unit, and announced that it planned to integrate the PhysX technology into its future GPU products.[54][55]

In July 2008, Nvidia took a write-down of approximately $200 million on its first-quarter revenue, after reporting that certain mobile chipsets and GPUs produced by the company had "abnormal failure rates" due to manufacturing defects. Nvidia, however, did not reveal the affected products. In September 2008, Nvidia became the subject of a class action lawsuit over the defects, claiming that the faulty GPUs had been incorporated into certain laptop models manufactured by Apple Inc., Dell, and HP. In September 2010, Nvidia reached a settlement, in which it would reimburse owners of the affected laptops for repairs or, in some cases, replacement.[56][57] On January 10, 2011, Nvidia signed a six-year, $1.5 billion cross-licensing agreement with Intel, ending all litigation between the two companies.[58]

In May 2011, it was announced that Nvidia had agreed to acquire Icera, a baseband chip company in the UK, for $367 million.[61] In November 2011, after initially unveiling it at Mobile World Congress, Nvidia released Tegra 3, its ARM-based system on a chip for mobile devices, which Nvidia claimed featured the first-ever quad-core mobile CPU.[59][60] In January 2013, Nvidia unveiled the Tegra 4, as well as the Nvidia Shield, an Android-based handheld game console powered by the new system on a chip.[62] On July 29, 2013, Nvidia announced that it had acquired PGI from STMicroelectronics.[63]

In February 2013, Nvidia announced its plans to build a new headquarters in the form of two giant triangle-shaped buildings on the other side of San Tomas Expressway (to the west of its existing headquarters complex). The company selected triangles as its design theme. As Huang explained in a blog post, the triangle is "the fundamental building block of computer graphics".[64]

In 2014, Nvidia ported the Valve games Portal and Half-Life 2 to its Nvidia Shield Tablet under the Lightspeed Studio name.[65][66] Since 2014, Nvidia has diversified its business, focusing on three markets: gaming, automotive electronics, and mobile devices.[67]

That same year, Nvidia also prevailed in litigation brought by the trustee of 3dfx's bankruptcy estate to challenge its 2000 acquisition of 3dfx's intellectual assets. On November 6, 2014, in an unpublished memorandum order, the U.S. Court of Appeals for the Ninth Circuit affirmed the "district court's judgment affirming the bankruptcy court's determination that [Nvidia] did not pay less than fair market value for assets purchased from 3dfx shortly before 3dfx filed for bankruptcy".[68]

2016–2018


On May 6, 2016, Nvidia unveiled the first GPUs of the GeForce 10 series, the GTX 1080 and 1070, based on the company's new Pascal microarchitecture. Nvidia claimed that both models outperformed its Maxwell-based Titan X model; the models incorporate GDDR5X and GDDR5 memory respectively, and use a 16 nm manufacturing process. The architecture also supports a new hardware feature known as simultaneous multi-projection (SMP), which is designed to improve the quality of multi-monitor and virtual reality (VR) rendering.[69][70][71] Laptops that include these GPUs and are sufficiently thin – as of late 2017, under 0.8 inches (20 mm) – have been designated as meeting Nvidia's "Max-Q" design standard.[72]

In July 2016, Nvidia agreed to a settlement of a false advertising lawsuit over its GTX 970 model, which could not use all of its advertised 4 GB of VRAM due to hardware design limitations.[73] In May 2017, Nvidia announced a partnership with Toyota, which would use Nvidia's Drive PX-series artificial intelligence platform for its autonomous vehicles.[74] In July 2017, Nvidia and the Chinese search giant Baidu announced a far-reaching AI partnership covering cloud computing, autonomous driving, consumer devices, and Baidu's open-source AI framework PaddlePaddle; Baidu announced that Nvidia's Drive PX 2 would be the foundation of its autonomous-vehicle platform.[75]

Nvidia officially released the Titan V on December 7, 2017.[76][77]

Nvidia officially released the Nvidia Quadro GV100 on March 27, 2018,[78] and the RTX 2080 GPUs on September 27, 2018. In 2018, Google announced that Nvidia's Tesla P4 graphics cards would be integrated into Google Cloud's artificial intelligence services.[79]

In May 2018, a thread was started on the Nvidia user forum[80] asking the company to update users on when it would release web drivers for its cards installed in legacy Mac Pro machines (up to the mid-2012 5,1) running the macOS Mojave operating system 10.14. Web drivers are required to enable graphics acceleration and the multiple-display capabilities of the GPU. On its Mojave update info website, Apple stated that macOS Mojave would run on legacy machines with 'Metal compatible' graphics cards[81] and listed Metal compatible GPUs, including some manufactured by Nvidia.[82] However, this list did not include Metal compatible cards that worked in macOS High Sierra using Nvidia-developed web drivers. In September, Nvidia responded, "Apple fully controls drivers for macOS. But if Apple allows, our engineers are ready and eager to help Apple deliver great drivers for macOS 10.14 (Mojave)."[83] In October, Nvidia followed this up with another public announcement, "Apple fully controls drivers for macOS. Unfortunately, Nvidia currently cannot release a driver unless it is approved by Apple,"[84] suggesting a possible rift between the two companies.[85] By January 2019, with still no sign of the enabling web drivers, Apple Insider weighed in on the controversy with a claim that Apple management "doesn't want Nvidia support in macOS".[86] The following month, Apple Insider followed this up with another claim that Nvidia support was abandoned because of "relational issues in the past"[87] and that Apple was developing its own GPU technology.[88] Without Apple-approved Nvidia web drivers, Apple users were left to replace their Nvidia cards with a supported competing brand, such as AMD Radeon, from the list recommended by Apple.[89]

2019 acquisition of Mellanox Technologies

Nvidia Yokneam office (former Mellanox Technologies) in Yokneam Illit, Israel, in March 2023

On March 11, 2019, Nvidia announced a deal to buy Mellanox Technologies for $6.9 billion[90] to substantially expand its footprint in the high-performance computing market. In May 2019, Nvidia announced new RTX Studio laptops, claiming they would be up to seven times faster than a top-end MacBook Pro with a Core i9 and AMD's Radeon Pro Vega 20 graphics in apps like Maya and RedCine-X Pro.[91] In August 2019, Nvidia announced Minecraft RTX, an official Nvidia-developed patch for Minecraft adding real-time DXR ray tracing exclusively to the Windows 10 version of the game. The whole game is, in Nvidia's words, "refit" with path tracing, which dramatically affects the way light, reflections, and shadows work inside the engine.[92]

2020–2023


In May 2020, Nvidia announced it was acquiring Cumulus Networks.[93] After the acquisition, Cumulus was absorbed into Nvidia's networking business unit, alongside Mellanox.

In May 2020, Nvidia developed an open-source ventilator to address the shortage resulting from the global coronavirus pandemic.[94] On May 14, 2020, Nvidia officially announced their Ampere GPU microarchitecture and the Nvidia A100 GPU accelerator.[95][96] In July 2020, it was reported that Nvidia was in talks with SoftBank to buy Arm, a UK-based chip designer, for $32 billion.[97]

On September 1, 2020, Nvidia officially announced the GeForce 30 series based on the company's new Ampere microarchitecture.[98][99]

On September 13, 2020, Nvidia announced that they would buy Arm from SoftBank Group for $40 billion, subject to the usual scrutiny, with the latter retaining a 10% share of Nvidia.[100][101][102][103]

Nvidia GeForce RTX 2080 Ti, part of the RTX 20 series, which is the first generation of Nvidia RTX

In October 2020, Nvidia announced plans to build what it described as the UK's most powerful supercomputer in Cambridge, England. The computer, called Cambridge-1, launched in July 2021 with a $100 million investment and employs AI to support healthcare research.[104][105] According to Jensen Huang, "The Cambridge-1 supercomputer will serve as a hub of innovation for the UK, and further the groundbreaking work being done by the nation's researchers in critical healthcare and drug discovery."[106]

Also in October 2020, alongside the release of the Nvidia RTX A6000, Nvidia announced it was retiring its Quadro workstation GPU brand: future workstation products would carry the Nvidia RTX name and be based on the Nvidia Ampere architecture.[107]

In August 2021, the proposed takeover of Arm stalled after the UK's Competition and Markets Authority raised "significant competition concerns".[108] In October 2021, the European Commission opened a competition investigation into the takeover, stating that the acquisition could restrict competitors' access to Arm's products and give Nvidia too much internal information about its competitors because of their dealings with Arm. The investigation was set to end on March 15, 2022.[110][111] In early February 2022, SoftBank (the parent company of Arm) and Nvidia announced that they "had agreed not to move forward with the transaction 'because of significant regulatory challenges'";[109] the deal would have been the largest semiconductor acquisition in history.[17][18] That same month, Nvidia was reportedly compromised by a cyberattack.[112]

In March 2022, Nvidia's CEO Jensen Huang said the company was open to having Intel manufacture its chips in the future.[113] This was the first time Nvidia had indicated it might use Intel's upcoming foundry services.

In April 2022, it was reported that Nvidia planned to open a new research center in Yerevan, Armenia.[114]

In May 2022, Nvidia opened Voyager, the second of the two giant buildings at its new headquarters complex to the west of the old one. Unlike its smaller and older sibling Endeavor, the triangle theming is used more "sparingly" in Voyager.[115][116]

In September 2022, Nvidia announced its next-generation automotive-grade chip, Drive Thor.[117][118]

In September 2022, Nvidia announced a collaboration with the Broad Institute of MIT and Harvard involving Clara, Nvidia's suite of AI-powered healthcare software, which includes Parabricks and MONAI.[119]

In October 2022, U.S. Department of Commerce regulations placing an embargo on exports of advanced microchips to China took effect, and Nvidia's data center chips were added to the export control list. The next month, the company unveiled a new chip for China, the A800 GPU, that complied with the export control rules.[120]

In September 2023, Getty Images announced that it was partnering with Nvidia to launch Generative AI by Getty Images, a new tool that lets people create images using Getty's library of licensed photos. Getty will use Nvidia's Edify model, which is available on Nvidia's generative AI model library Picasso.[121]

On September 26, 2023, Denny's CEO Kelli Valade joined Huang in East San Jose to celebrate the founding of Nvidia at Denny's on Berryessa Road, where a plaque was installed to mark the relevant corner booth as the birthplace of a $1 trillion company.[26][122] By then, Nvidia's H100 GPUs were in such demand that even other tech giants were beholden to how Nvidia allocated supply. Larry Ellison of Oracle Corporation said that month that during a dinner with Huang at Nobu in Palo Alto, he and Elon Musk of Tesla, Inc. and xAI "were begging" for H100s, "I guess is the best way to describe it. An hour of sushi and begging".[123]

In October 2023, it was reported that Nvidia had quietly begun designing ARM-based central processing units (CPUs) for Microsoft's Windows operating system with a target to start selling them in 2025.[124]

2024–2025


In January 2024, Forbes reported that Nvidia had increased its lobbying presence in Washington, D.C., as American lawmakers considered proposals to regulate artificial intelligence. From 2023 to 2024, the company reportedly hired at least four government affairs specialists with professional backgrounds at agencies including the United States Department of State and the Department of the Treasury. It was noted that the $350,000 the company spent on lobbying in 2023 was small compared to the spending of a number of major tech companies in the artificial intelligence space.[125]

In January 2024, Raymond James Financial analysts estimated that Nvidia was selling the H100 GPU in the price range of $25,000 to $30,000 each, while on eBay, individual H100s cost over $40,000.[126] Several major technology companies were purchasing tens or hundreds of thousands of GPUs for their data centers to run generative artificial intelligence projects; simple arithmetic implied that they were committing to billions of dollars in capital expenditures.[126]
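The "simple arithmetic" above can be sketched as a back-of-envelope estimate. The per-unit price range is the one reported by the analysts; the order sizes below are purely hypothetical, chosen only to illustrate the scale:

```python
# Back-of-envelope estimate: capital expenditure implied by bulk H100 orders.
# The $25,000-$30,000 per-unit range is the reported analyst estimate;
# the order sizes below are hypothetical.
PRICE_LOW, PRICE_HIGH = 25_000, 30_000  # USD per H100

def capex_range(num_gpus: int) -> tuple[float, float]:
    """Return (low, high) estimated spend in billions of USD."""
    return (num_gpus * PRICE_LOW / 1e9, num_gpus * PRICE_HIGH / 1e9)

for n in (50_000, 200_000):
    low, high = capex_range(n)
    print(f"{n:,} GPUs: ${low:.2f}B-${high:.2f}B")
```

Even a mid-sized order of tens of thousands of units lands in the billions of dollars, which is the commitment the reporting describes.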

In February 2024, it was reported that Nvidia was the "hot employer" in Silicon Valley because it was offering interesting work and good pay at a time when other tech employers were downsizing. Half of Nvidia employees earned over $228,000 in 2023.[127] By then, Nvidia GPUs had become so valuable that they needed special security while in transit to data centers. Cisco chief information officer Fletcher Previn explained at a CIO summit: "Those GPUs arrive by armored car".[128]

On March 1, 2024, Nvidia became the third company in the history of the United States to close with a market capitalization in excess of $2 trillion.[45] Nvidia needed only 180 days to get to $2 trillion from $1 trillion, while the first two companies, Apple and Microsoft, each took over 500 days.[45] On March 18, Nvidia announced its new AI chip and microarchitecture Blackwell, named after mathematician David Blackwell.[129]

In April 2024, Reuters reported that China had allegedly acquired banned Nvidia chips and servers from Supermicro and Dell via tenders.[130]

In June 2024, the Federal Trade Commission (FTC) and the Justice Department (DOJ) began antitrust investigations into Nvidia, Microsoft and OpenAI, focusing on their influence in the AI industry. The FTC led the investigations into Microsoft and OpenAI, while the DOJ handled Nvidia. The probes centered on the companies' conduct rather than mergers. This development followed an open letter from OpenAI employees expressing concerns about the rapid AI advancements and lack of oversight.[131]

The company became the world's most valuable, surpassing Microsoft and Apple, on June 18, 2024, after its market capitalization exceeded $3.3 trillion.[132][133]

In June 2024, Trend Micro announced a partnership with Nvidia to develop AI-driven security tools, notably to protect the data centers where AI workloads are processed. This collaboration integrates Nvidia NIM and Nvidia Morpheus with Trend Vision One and its Sovereign and Private Cloud solutions to improve data privacy, real-time analysis, and rapid threat mitigation.[134]

In October 2024, Nvidia introduced a family of open-source multimodal large language models called NVLM 1.0, which features a flagship version with 72 billion parameters, designed to improve text-only performance after multimodal training.[135][136]

In November 2024, the company was added to the Dow Jones Industrial Average.[137][138]

In November 2024, Morgan Stanley reported that "the entire 2025 production" of all of Nvidia's Blackwell chips was "already sold out".[139]

Also in November 2024, the company bought 1.2 million shares of Nebius Group.[140]

Nvidia was ranked #3 on Forbes' "Best Places to Work" list in 2024.[141]

As of January 7, 2025, Nvidia's $3.66 trillion market capitalization was more than double the combined value of AMD, Arm, Broadcom, and Intel.[142]

In January 2025, Nvidia suffered the largest one-day loss in market capitalization in U.S. corporate history, at $600 billion. The drop was attributed to DeepSeek, a Chinese AI startup that developed an advanced AI model at lower cost and with less computing power.[143] DeepSeek's AI assistant, using the V3 model, surpassed ChatGPT as the highest-rated free app in the U.S. on Apple's App Store.[144][145]

On April 7, 2025, Nvidia released Llama-3.1-Nemotron-Ultra-253B-v1, a reasoning large language model, under the Nvidia Open Model License. The Nemotron family comes in three sizes: Nano, Super, and Ultra.[146]

On May 28, 2025, Nvidia's second-quarter revenue forecast fell short of market estimates due to U.S. export restrictions impacting AI chip sales to China, yet the company's stock rose 5% as investors remained optimistic about long-term AI demand.[147]

In July 2025, it was announced that Nvidia had acquired CentL, a Canadian AI firm.[148]

On July 10, 2025, Nvidia became the first company to close with a market capitalization above $4 trillion, after briefly touching and then retreating from that level the previous day.[149][150] At that point, Nvidia was worth more than the combined value of all publicly traded companies in the United Kingdom.[149]

On July 29, 2025, Nvidia ordered 300,000 H20 AI chips from Taiwan Semiconductor Manufacturing Company (TSMC) due to strong demand from Chinese tech firms like Tencent and Alibaba.[151]

In August 2025, Nvidia and competitor Advanced Micro Devices agreed to pay 15% of the revenues from certain chip sales in China as part of an arrangement to obtain export licenses.[152] Nvidia will pay only for sales of the H20 chips.[153]

On September 17, 2025, Nvidia chief executive Jensen Huang said he was "disappointed" after the Cyberspace Administration of China (CAC) ordered companies including TikTok parent ByteDance and Alibaba to end both testing and orders of the RTX Pro 6000D, a graphics chip Nvidia had designed specifically for the Chinese market, according to three people with knowledge of the matter. The regulator's ban on purchases of Nvidia's artificial intelligence chips by the country's largest technology companies was part of an effort to strengthen the domestic industry and compete with the United States.[154][155]

On September 18, 2025, Nvidia announced it would invest $5 billion in Intel, backing the struggling U.S. chipmaker just weeks after the White House arranged a deal for the federal government to take a major stake in the company. The investment will give Nvidia a holding of about 4% in Intel once new shares are issued to finalize the agreement. The move provides Intel with fresh support following years of unsuccessful turnaround efforts and will allow Nvidia to offer its GB300 data center servers, based on Blackwell GPUs, on Intel's x86 architecture.[156]

On September 22, 2025, Nvidia and OpenAI announced a partnership in which Nvidia would invest $100 billion in OpenAI, and OpenAI would build new AI data centers using Nvidia chips and systems, amounting to at least 10 gigawatts of system power, more than the output of four Hoover Dams. The deal is a circular arrangement, a model common in AI partnerships: OpenAI effectively pays back Nvidia's investment through purchases of Nvidia's chips.[157] This "circularity" is estimated at $35 billion in new Nvidia chips bought by OpenAI for every $10 billion Nvidia invests in OpenAI.[158]
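The estimated ratio above implies a rough upper bound on the chip purchases tied to the investment. A minimal sketch, assuming the estimate scales linearly over the full commitment (the linearity assumption is mine, not the source's):

```python
# Sketch of the reported "circularity": an estimated $35B of chip purchases
# for every $10B Nvidia invests. The linear scaling below is an assumption
# made for illustration, not a claim from the source.
CHIP_DOLLARS_PER_INVESTED_DOLLAR = 35 / 10

def implied_chip_purchases(investment_bn: float) -> float:
    """Chip purchases (billions USD) implied by an investment (billions USD)."""
    return investment_bn * CHIP_DOLLARS_PER_INVESTED_DOLLAR

# Under linear scaling, the full $100B commitment would imply up to $350B
# in chip purchases.
print(implied_chip_purchases(100.0))
```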

A server farm dedicated to autonomous AI has been established through a collaboration between SDS Schönfeld, a data services firm owned by UC Schönfeld, and VAST Data, an Israeli company specializing in AI storage management that collaborates closely with Nvidia. Reports indicate that approximately $30 billion has been secured for the farm. This server farm is expected to feature "tens of petabytes of data infrastructure powered by VAST, along with thousands of Nvidia Blackwell GPUs and Nvidia network processors."[159]

Fabless manufacturing


Nvidia uses external suppliers for all phases of manufacturing, including wafer fabrication, assembly, testing, and packaging. Nvidia thus avoids most of the investment and production costs and risks associated with chip manufacturing, although it does sometimes directly procure some components and materials used in the production of its products (e.g., memory and substrates). Nvidia focuses its own resources on product design, quality assurance, marketing, and customer support.[160][161]

Corporate affairs

Sales by business unit (2023)[162]
Business unit          Sales (billion $)   Share
Compute & networking   47.4                77.8%
Graphics               13.5                22.2%

Sales by region (2023)[162]
Region            Sales (billion $)   Share
United States     27.0                44.3%
Taiwan            13.4                22.0%
China             10.3                16.9%
Other countries   10.2                16.8%

Leadership


Nvidia's key management as of March 2024 consists of:[163]

  • Jensen Huang, founder, president and chief executive officer
  • Chris Malachowsky, founder and Nvidia fellow
  • Colette Kress, executive vice president and chief financial officer
  • Jay Puri, executive vice president of worldwide field operations
  • Debora Shoquist, executive vice president of operations
  • Tim Teter, executive vice president, general counsel and secretary

Board of directors


As of November 2024, the company's board consisted of the following directors:[164]

Finances

Nvidia stock price (1999–2023)
10-year financials (2016–2025)
Year   Revenue (mn. US$)   Net income (mn. US$)   Employees
2016    5,010                  614                 9,227
2017    6,910                1,666                10,299
2018    9,714                3,047                11,528
2019   11,716                4,141                13,277
2020   10,918                2,796                13,775
2021   16,675                4,332                18,975
2022   26,914                9,752                22,473
2023   26,974                4,368                26,000
2024   60,922               29,760                29,600
2025  130,497               72,880                36,000

For the fiscal year 2020, Nvidia reported earnings of US$2.796 billion, with an annual revenue of US$10.918 billion, a decline of 6.8% over the previous fiscal cycle. Nvidia's shares traded at over $531 per share, and its market capitalization was valued at over US$328.7 billion in January 2021.[165][166]

For the second quarter of 2020, Nvidia reported sales of $3.87 billion, a 50% rise from the same period in 2019, driven by a surge in demand for computer technology during the pandemic. According to the company's financial chief, Colette Kress, the effects of the pandemic will "likely reflect this evolution in enterprise workforce trends with a greater focus on technologies, such as Nvidia laptops and virtual workstations, that enable remote work and virtual collaboration."[167] In May 2023, Nvidia crossed $1 trillion in market valuation during trading hours,[168] and grew to $1.2 trillion by the following November.[169]

Ownership


The 10 largest shareholders of Nvidia in early 2024 were:[162]

GPU Technology Conference


Nvidia's GPU Technology Conference (GTC) is a series of technical conferences held around the world.[170] It originated in 2009 in San Jose, California, with an initial focus on the potential for solving computing challenges through GPUs.[171] The conference's focus has since shifted to applications of artificial intelligence and deep learning, including self-driving cars, healthcare, high-performance computing, and Nvidia Deep Learning Institute (DLI) training.[172] GTC 2018 attracted over 8,400 attendees.[170] GTC 2020 was converted to a digital event and drew roughly 59,000 registrants.[173] After several years of remote-only events, GTC returned to an in-person format in San Jose, California, in March 2024.[174]

At GTC 2025, Nvidia unveiled its next-generation AI hardware, the Blackwell Ultra and Vera Rubin chips, signaling a leap toward agentic AI and reasoning-capable computing. Huang projected that AI-driven infrastructure would drive Nvidia's data center revenue to $1 trillion by 2028. The announcement also introduced Isaac GR00T N1 (humanoid robotics model), Cosmos (synthetic training data AI), and the Newton physics engine, developed in collaboration with DeepMind and Disney Research.[175]

Product families

A Shield Tablet with its accompanying input pen (left) and gamepad

Nvidia's product families include graphics processing units, wireless communication devices, and automotive hardware and software, such as:

  • GeForce, consumer-oriented graphics processing products
  • RTX, professional visual computing graphics processing products (replacing GTX and Quadro)
  • NVS, a multi-display business graphics processor
  • Tegra, a system on a chip series for mobile devices
  • Tesla, line of dedicated general-purpose GPUs for high-end image generation applications in professional and scientific fields
  • nForce, a motherboard chipset created by Nvidia for Intel (Celeron, Pentium and Core 2) and AMD (Athlon and Duron) microprocessors
  • GRID, a set of hardware and services by Nvidia for graphics virtualization
  • Shield, a range of gaming hardware including the Shield Portable, Shield Tablet and Shield TV
  • Drive, a range of hardware and software products for designers and manufacturers of autonomous vehicles. The Drive PX-series is a high-performance computer platform aimed at autonomous driving through deep learning,[176] while Driveworks is an operating system for driverless cars.[177]
  • BlueField, a range of data processing units, initially inherited from their acquisition of Mellanox Technologies[178][179]
  • Datacenter/server class CPU, codenamed Grace, released in 2023[180][181]
  • DGX, an enterprise platform designed for deep learning applications
  • Maxine, a platform providing developers a suite of AI-based conferencing software[182]

Open-source software support


Until September 23, 2013, Nvidia had not published any documentation for its advanced hardware,[183] meaning that programmers could not write free and open-source device drivers for its products without resorting to reverse engineering.

Instead, Nvidia provides its own binary GeForce graphics drivers for X.Org and an open-source library that interfaces with the Linux, FreeBSD, or Solaris kernels and the proprietary graphics software. Nvidia previously provided, but no longer supports, an obfuscated open-source driver that only supports two-dimensional hardware acceleration and ships with the X.Org distribution.[184]

The proprietary nature of Nvidia's drivers has generated dissatisfaction within free-software communities. In a 2012 talk, Linus Torvalds gave a middle-finger gesture and criticized Nvidia’s stance toward Linux.[185][186] Some Linux and BSD users insist on using only open-source drivers and regard Nvidia's insistence on providing nothing more than a binary-only driver as inadequate, given that competing manufacturers such as Intel offer support and documentation for open-source developers, and others like AMD release partial documentation and provide some active development.[187][188]

Nvidia provides only x86/x64 and ARMv7-A versions of its proprietary driver; as a result, features like CUDA are unavailable on other platforms.[189] Some users claim that Nvidia's Linux drivers impose artificial restrictions, such as limiting the number of monitors that can be used at the same time, but the company has not commented on these accusations.[190]

In 2014, with its Maxwell GPUs, Nvidia began requiring firmware supplied by the company to unlock all features of its graphics cards.[191][192][193]

On May 12, 2022, Nvidia announced that it was open-sourcing its GPU kernel modules.[194][195][196] Support for Nvidia's firmware was implemented in nouveau in 2023, enabling proper power management and GPU reclocking for Turing and newer graphics card generations.[197][198]

On July 21, 2025, Nvidia announced it would extend CUDA support to RISC-V.[199][200][201]


Deep learning


Nvidia GPUs are used in deep learning and accelerated analytics thanks to Nvidia's CUDA software platform and API, which let programmers exploit the large number of cores in a GPU to parallelize the BLAS operations used extensively in machine learning algorithms.[11] They were included in many Tesla, Inc. vehicles until Musk announced at Tesla Autonomy Day in 2019 that the company had developed its own SoC and full self-driving computer and would stop using Nvidia hardware in its vehicles.[203][204] These GPUs are used by researchers, laboratories, tech companies, and enterprises.[205] In 2009, Nvidia was involved in what was called the "big bang" of deep learning, "as deep-learning neural networks were combined with Nvidia graphics processing units (GPUs)".[206] That year, the Google Brain team used Nvidia GPUs to create deep neural networks capable of machine learning, and Andrew Ng determined that GPUs could increase the speed of deep learning systems by about 100 times.[207]
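
The parallelism CUDA exposes can be illustrated with a toy sketch (plain Python, not Nvidia's API): every element of a BLAS-style matrix product can be computed independently, which is exactly what lets a GPU hand the work to thousands of cores at once.

```python
# Toy illustration (not Nvidia code) of the data parallelism CUDA exploits
# in BLAS-style kernels: each element of a matrix product depends only on
# one row of A and one column of B, so a GPU can assign every output
# element to its own thread and compute them all simultaneously.

def matmul_element(A, B, i, j):
    """The work of one logical 'thread': a single dot product."""
    return sum(A[i][k] * B[k][j] for k in range(len(B)))

def parallel_style_matmul(A, B):
    """Computes every output element independently, as GPU cores would."""
    return [[matmul_element(A, B, i, j) for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(parallel_style_matmul(A, B))  # [[19, 22], [43, 50]]
```

On a GPU, cuBLAS performs this decomposition (plus tiling and shared-memory reuse) in hardware across thousands of cores; the sketch only shows why the problem parallelizes so well.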

DGX


DGX is a line of supercomputers by Nvidia.

In April 2016, Nvidia introduced the DGX-1, based on an eight-GPU cluster, to improve users' ability to apply deep learning by combining GPUs with integrated deep learning software.[208] Nvidia gifted its first DGX-1 to OpenAI in August 2016 to help it train larger and more complex AI models, with the capability of reducing processing time from six days to two hours.[209][210] It also developed Tesla K80 and P100 GPU-based virtual machines, which Google made available through Google Cloud in November 2016.[211] Microsoft added GPU servers in a preview offering of its N series based on Nvidia's Tesla K80s, each containing 4,992 processing cores, and later that year AWS launched its P2 instance, using up to 16 Tesla K80 GPUs. Nvidia also partnered with IBM to create IBM PowerAI, a software kit that boosts the AI capabilities of Watson,[212][213][214] and offers its own Deep Learning software development kit.[215] In 2017, the GPUs were also brought online at the Riken Center for Advanced Intelligence Project for Fujitsu.[216] The company's deep learning technology contributed to a boost in its 2017 earnings.[217]

In 2018, Nvidia researchers demonstrated imitation-learning techniques for industrial robots, creating a system that, after brief revision and testing, can be used to control next-generation universal robots. In addition to manufacturing GPUs, Nvidia provides parallel processing capabilities to researchers and scientists, allowing them to run high-performance applications efficiently.[218]

Robotics


In 2020, Nvidia unveiled "Omniverse", a virtual environment designed for engineers.[219] Nvidia also open-sourced Isaac Sim, which makes use of this Omniverse to train robots through simulations that mimic the physics of the robots and the real world.[220][221]

In 2024, Huang oriented Nvidia's focus towards humanoid robots and self-driving cars, which he expects to gain widespread adoption.[222][223]

In 2025, Nvidia announced Isaac GR00T N1, an open-source foundation model "designed to expedite the development and capabilities of humanoid robots". Neura Robotics, 1X Technologies and Vention are among the first companies to use the model.[224][225][226]

Inception Program


Nvidia's Inception Program was created to support startups making exceptional advances in the fields of artificial intelligence and data science. Award winners are announced at Nvidia's GTC Conference. In May 2017, the program had 1,300 companies.[227] As of March 2018, there were 2,800 startups in the Inception Program.[228] As of August 2021, the program has over 8,500 members in 90 countries, with cumulative funding of US$60 billion.[229]

Controversies


Maxwell advertising dispute


GTX 970 hardware specifications


Issues with the GeForce GTX 970's specifications were first raised by users who found that the cards, while featuring 4 GB of memory, rarely accessed memory beyond the 3.5 GB boundary. Further testing and investigation eventually led Nvidia to issue a statement that the card's initially announced specifications had been altered without notice before the card became commercially available, and that the card took a performance hit once memory above the 3.5 GB limit was put into use.[230][231][232]

The card's back-end hardware specifications, initially announced as identical to those of the GeForce GTX 980, differed in the amount of L2 cache (1.75 MB versus 2 MB in the GTX 980) and the number of ROPs (56 versus 64). Additionally, it was revealed that the card was designed to access its memory as a 3.5 GB section plus a 0.5 GB one, with access to the latter being seven times slower than to the former.[233] The company then promised a driver modification to alleviate the performance issues produced by the card's cutbacks,[234] but later clarified that the promise had been a miscommunication and there would be no specific driver update for the GTX 970.[235] Nvidia said it would assist customers who wanted refunds in obtaining them.[236] On February 26, 2015, Nvidia CEO Jensen Huang apologized for the incident on Nvidia's official blog.[237] In February 2015 a class-action lawsuit alleging false advertising was filed against Nvidia and Gigabyte Technology in the U.S. District Court for Northern California.[238][239]

Nvidia revealed that it is able to disable individual units, each containing 256 KB of L2 cache and 8 ROPs, without disabling whole memory controllers.[240] This comes at the cost of dividing the memory bus into high-speed and low-speed segments that cannot be accessed at the same time, unless one segment is reading while the other is writing, because the L2/ROP unit managing both GDDR5 controllers shares the read return channel and the write data bus between the two controllers and itself.[240] This design is used in the GeForce GTX 970, which can therefore be described as having 3.5 GB in its high-speed segment on a 224-bit bus and 0.5 GB in a low-speed segment on a 32-bit bus.[240]
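
The roughly seven-fold speed gap between the two segments follows directly from the bus widths. A back-of-envelope check (assuming GDDR5 at an effective 7 Gb/s per pin, a figure not stated above):

```python
# Back-of-envelope check of the GTX 970's segmented memory bus. Peak
# bandwidth scales with bus width at a fixed per-pin data rate, so a
# 32-bit segment is 224/32 = 7x slower than a 224-bit one. The 7 Gb/s
# effective per-pin rate is an illustrative assumption.

def segment_bandwidth_gbps(bus_width_bits, pin_rate_gbps=7.0):
    """Peak bandwidth in GB/s = (bus width / 8 bits per byte) * per-pin rate."""
    return bus_width_bits / 8 * pin_rate_gbps

fast = segment_bandwidth_gbps(224)  # 3.5 GB segment
slow = segment_bandwidth_gbps(32)   # 0.5 GB segment
print(fast, slow, fast / slow)      # 196.0 28.0 7.0
```

The 7:1 ratio matches the reported slowdown for the 0.5 GB segment, independent of the exact memory clock.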

On July 27, 2016, Nvidia agreed to a preliminary settlement of the U.S. class action lawsuit,[238] offering a $30 refund on GTX 970 purchases. The agreed upon refund represents the portion of the cost of the storage and performance capabilities the consumers assumed they were obtaining when they purchased the card.[241]

GeForce Partner Program


The Nvidia GeForce Partner Program was a marketing program designed to provide partnering companies with benefits such as public relations support, video game bundling, and marketing development funds.[242] The program proved to be controversial, with complaints about it possibly being an anti-competitive practice.[243]

First announced in a blog post on March 1, 2018,[244] it was canceled on May 4, 2018.[245]

Hardware Unboxed


On December 10, 2020, Nvidia told YouTube tech reviewer Steven Walton of Hardware Unboxed that it would no longer supply him with GeForce Founders Edition graphics card review units.[246][247] In a Twitter message, Hardware Unboxed said, "Nvidia have officially decided to ban us from receiving GeForce Founders Edition GPU review samples. Their reasoning is that we are focusing on rasterization instead of ray tracing. They have said they will revisit this 'should your editorial direction change.'"[248]

In emails that were disclosed by Walton from Nvidia Senior PR Manager Bryan Del Rizzo, Nvidia had said:

...your GPU reviews and recommendations have continued to focus singularly on rasterization performance, and you have largely discounted all of the other technologies we offer gamers. It is very clear from your community commentary that you do not see things the same way that we, gamers, and the rest of the industry do.[249]

TechSpot, partner site of Hardware Unboxed, said, "this and other related incidents raise serious questions around journalistic independence and what they are expecting of reviewers when they are sent products for an unbiased opinion."[249]

A number of technology reviewers came out strongly against Nvidia's move.[250][251] Linus Sebastian, of Linus Tech Tips, titled the episode of his weekly WAN Show "NVIDIA might ACTUALLY be EVIL..."[252] and was highly critical of the company's move to dictate specific outcomes of technology reviews.[253] The review site Gamers Nexus called it "Nvidia's latest decision to shoot both its feet: They've now made it so that any reviewers covering RT will become subject to scrutiny from untrusting viewers who will suspect subversion by the company. Shortsighted self-own from NVIDIA."[254]

Two days later, Nvidia reversed their stance.[255][256] Hardware Unboxed sent out a Twitter message, "I just received an email from Nvidia apologizing for the previous email & they've now walked everything back."[257][250] On December 14, Hardware Unboxed released a video explaining the controversy from their viewpoint.[258] Via Twitter, they also shared a second apology sent by Nvidia's Del Rizzo that said "to withhold samples because I didn't agree with your commentary is simply inexcusable and crossed the line."[259][260]

Improper disclosures about cryptomining


In 2018, Nvidia's chips became popular for cryptomining, the process of earning crypto rewards in exchange for verifying transactions on distributed ledgers, according to the U.S. Securities and Exchange Commission (SEC). In a statement and charging order, the SEC said the company failed to disclose that cryptomining was a "significant element" of its revenue growth from sales of chips designed for gaming, omissions that misled investors and analysts interested in understanding the impact of cryptomining on Nvidia's business. Nvidia, which did not admit or deny the findings, agreed to pay $5.5 million to settle the civil charges, the SEC said in May 2022.[261]

French Competition Authority Investigation


On September 26, 2023, Nvidia's French offices were searched by the French Competition Authority. The raid, authorized by a judge, was part of an investigation into suspected anti-competitive practices in the graphics card sector. Nvidia has not publicly commented on the incident.[262]

AI regulation dispute with Anthropic


In July 2025, a public dispute emerged between Nvidia CEO Jensen Huang and Anthropic CEO Dario Amodei over AI regulation and industry practices. The conflict escalated when Amodei vehemently denied Huang's allegations that he sought to control the AI industry through safety concerns, calling Huang's claims an "outrageous lie."[263] The dispute centered on differing philosophies regarding AI development, with Amodei advocating for stronger regulatory oversight and "responsible scaling policies," while Huang promoted open-source development and criticized what Nvidia characterized as "regulatory capture."[263] Nvidia responded by stating that "lobbying for regulatory capture against open source will only stifle innovation, make AI less safe and secure, and less democratic."[263] The controversy highlighted broader tensions within the AI industry between companies favoring rapid development and those emphasizing safety measures and regulation.[263]

Proposed Shanghai facility


In May 2025, U.S. senators Jim Banks and Elizabeth Warren criticized a proposed Nvidia facility in Shanghai, saying that it "raises significant national security and economic security issues that warrant serious review."[264]

H20 production halt (2025)


In August 2025, Nvidia ordered suppliers to halt production of its H20 AI chip following Chinese government directives warning domestic companies against purchasing the processor due to security concerns.[265][266] The company directed suppliers including Taiwan Semiconductor Manufacturing Company, Samsung Electronics, and Amkor Technology to suspend work on the China-focused processor.[267]

The H20 was developed in late 2023 specifically for the Chinese market to comply with U.S. export restrictions, featuring 96GB of HBM3 memory and 4.0 TB/s memory bandwidth—higher than the H100—but with significantly reduced computational power at 296 TFLOPs compared to the H100's 1979 TFLOPs.[268][269] Despite lower raw performance, the H20 demonstrated over 20% faster performance than the H100 in large language model inference tasks due to architectural optimizations.[268][269]
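
A simple roofline sketch shows why the H20's higher memory bandwidth can outweigh its far lower peak compute on inference. The H20 figures below come from the text; the H100's ~3.35 TB/s bandwidth and the ~2 FLOPs-per-byte arithmetic intensity of LLM decoding are outside assumptions used only for illustration:

```python
# Rough roofline sketch of why the H20 can beat the H100 on LLM inference
# despite much lower peak compute. H20 numbers (296 TFLOPs, 4.0 TB/s) are
# from the text; the H100's ~3.35 TB/s bandwidth and the ~2 FLOPs/byte
# intensity of LLM decoding are illustrative assumptions.

def attainable_tflops(peak_tflops, bandwidth_tbps, flops_per_byte):
    """Roofline model: performance = min(peak compute, bandwidth * intensity)."""
    return min(peak_tflops, bandwidth_tbps * flops_per_byte)

intensity = 2.0  # LLM decoding: every weight byte read supports ~2 FLOPs
h20 = attainable_tflops(296, 4.0, intensity)     # bandwidth-bound: 8.0
h100 = attainable_tflops(1979, 3.35, intensity)  # bandwidth-bound: 6.7
print(h20, h100, h20 / h100)  # the ratio is ~1.19, i.e. ~19% faster
```

Under these assumptions both chips are memory-bound during decoding, so attainable throughput tracks bandwidth rather than peak TFLOPs, roughly consistent with the 20%+ advantage reported above.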

Prior to the production halt, Nvidia had placed substantial orders for the H20, including 300,000 units from TSMC in July 2025, driven by strong demand from Chinese technology companies.[270] CEO Jensen Huang denied allegations that the H20 contained security backdoors, stating the chips were designed solely for commercial use.[271] The production suspension occurred as Nvidia was developing the B30A, a new chip based on its Blackwell architecture intended to succeed the H20 in the Chinese market.[272]

from Grokipedia
NVIDIA Corporation is an American multinational technology company founded in April 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem, and headquartered in Santa Clara, California. The company is renowned for inventing the graphics processing unit (GPU) in 1999 with the release of the GeForce 256, which revolutionized computer graphics and laid the foundation for advancements in gaming, professional visualization, and parallel computing. Over the decades, NVIDIA has expanded beyond graphics into leadership positions in artificial intelligence (AI) hardware, particularly through its CUDA software platform and Tensor Core technology, enabling breakthroughs in deep learning and machine learning applications. It dominates the gaming graphics market with its GeForce series of GPUs and provides data center solutions, including high-performance computing systems for AI training, scientific simulations, and cloud infrastructure, powering major tech giants and research institutions worldwide. These innovations, coupled with surging demand for AI technologies in the 2020s, have propelled NVIDIA to become one of the world's most valuable companies by market capitalization, reaching approximately $4.65 trillion as of late January 2026.

History

Founding and Early Years

NVIDIA Corporation was founded on April 5, 1993, by Jensen Huang, Chris Malachowsky, and Curtis Priem, three engineers who pooled their resources to start the company with an initial investment of $40,000. The founders aimed to develop high-performance computing solutions, particularly focusing on chips for accelerating 3D graphics in gaming and multimedia applications, at a time when the market was dominated by established players like Silicon Graphics Inc. (SGI). The company's early operations were based in a modest headquarters in Sunnyvale, California, reflecting its startup status and limited resources. To fuel growth, NVIDIA secured its first major investment from Sequoia Capital in 1993, which provided crucial funding amid a competitive landscape of numerous graphics chip startups. Despite these efforts, the company faced significant challenges in its initial years, including intense competition and the need to navigate a rapidly evolving technology sector. In 1995, NVIDIA launched its first product, the NV1 chip, a multimedia accelerator designed to handle graphics, sound, and game controls in one package. However, the NV1 suffered from compatibility issues with Microsoft's newly released DirectX standard, which rendered it unsuitable for many PC games and led to poor market adoption. These setbacks plunged the company into financial struggles, bringing it to the brink of bankruptcy as sales failed to materialize and resources dwindled. This period of near-failure tested the founders' resolve, ultimately paving the way for a strategic pivot toward more compatible consumer graphics solutions in the late 1990s.

Expansion into Graphics and Gaming

NVIDIA's breakthrough in the graphics market came with the release of the RIVA 128 in April 1997, which became the company's first major commercial success by supporting high-resolution 2D and 3D graphics and selling over a million units in its first four months. This chip, optimized for triangle-based rendering, doubled the performance of competitors at a competitive price, establishing NVIDIA as a leader in graphics processing units (GPUs). The RIVA 128's success helped the company recover from earlier setbacks and set the stage for its dominance in consumer graphics. Building on this momentum, NVIDIA launched the GeForce series in 1999 with the GeForce 256, the world's first GPU to integrate hardware transform and lighting (T&L) capabilities directly on the chip, offloading these tasks from the CPU to improve 3D rendering efficiency in gaming applications. This innovation marked a pivotal advancement in graphics hardware, enabling more complex visuals and higher frame rates in PC games. The GeForce line quickly gained traction through strong support for Microsoft's DirectX API, as NVIDIA collaborated closely with Microsoft to ensure compatibility and optimization for Direct3D, which became the standard for 3D graphics in Windows environments during the late 1990s and early 2000s. A key milestone in NVIDIA's expansion occurred in December 2000 when it acquired the assets of its rival 3dfx Interactive for $70 million in cash and 1 million shares of stock (valued at approximately $107 million at the time), gaining valuable intellectual property and engineering talent that bolstered its position in the graphics chip market. This acquisition eliminated a major competitor and allowed NVIDIA to integrate 3dfx's technologies into its own products, further solidifying its lead. 
By the early 2000s, NVIDIA had achieved dominance in the PC gaming market, capturing a significant share through superior performance and innovations that powered popular titles and set industry benchmarks for visual quality.

Entry into AI and Data Centers

NVIDIA's entry into artificial intelligence (AI) and data centers began with the launch of its Tesla GPU line in 2007, specifically designed for high-performance computing (HPC) applications beyond traditional graphics. The Tesla architecture targeted stream processing and general-purpose GPU (GPGPU) workloads, enabling supercomputing power in workstations and servers by leveraging parallel processing capabilities originally developed for gaming. This initiative marked an early pivot from consumer graphics to enterprise computing, with products like the Tesla C1060 providing dedicated HPC acceleration. Post-2012, NVIDIA's focus evolved toward data centers as breakthroughs in deep learning highlighted the potential of GPUs for AI training. The 2012 AlexNet model, trained on NVIDIA GPUs, demonstrated superior performance in image recognition tasks, sparking widespread adoption of GPU-accelerated computing in AI research and propelling NVIDIA's shift to data center solutions. This evolution built on core GPU technology foundations to address the growing demands of scalable AI workloads in enterprise environments. By integrating Tesla GPUs into server architectures, NVIDIA positioned itself as a key enabler for data-intensive applications in cloud and HPC sectors. A pivotal advancement came in 2014 with the development of NVLink, a high-speed interconnect technology that facilitated efficient multi-GPU scaling by providing up to 160 GB/s bidirectional bandwidth per GPU, far surpassing traditional PCIe interfaces. NVLink enabled seamless data sharing and reduced latency in GPU clusters, making it essential for large-scale AI and HPC simulations that required coordinated processing across multiple accelerators. This technology laid the groundwork for NVIDIA's data center dominance by supporting flexible server designs with enhanced performance for parallel computing tasks.
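
The practical effect of that bandwidth gap can be sketched with a transfer-time estimate. NVLink's 160 GB/s figure is from the text; the ~16 GB/s figure for PCIe 3.0 x16 and the 80 GB payload are illustrative assumptions:

```python
# Illustrative only: time to move a large payload between GPUs, comparing
# NVLink's 160 GB/s (figure from the text) with PCIe 3.0 x16's ~16 GB/s
# (an outside assumption), to show why the interconnect matters for
# multi-GPU training.

def transfer_seconds(gigabytes, link_gb_per_s):
    """Time in seconds to move a payload over a link at the given GB/s."""
    return gigabytes / link_gb_per_s

payload_gb = 80  # e.g., one GPU's share of model weights and activations
print(transfer_seconds(payload_gb, 160))  # 0.5 s over NVLink
print(transfer_seconds(payload_gb, 16))   # 5.0 s over PCIe 3.0 x16
```

At a 10:1 bandwidth ratio, every gradient exchange or weight broadcast completes an order of magnitude sooner, which is why tightly coupled GPU clusters favored NVLink.
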
In 2016, NVIDIA introduced the DGX-1 system, the world's first purpose-built deep learning supercomputer, integrating eight Tesla P100 GPUs with NVLink for accelerated AI model training. The DGX platform came preloaded with optimized software stacks, including CUDA and deep learning frameworks, to streamline AI development for enterprises. This launch underscored NVIDIA's commitment to turnkey AI solutions, with early deliveries to partners like OpenAI highlighting its role in pioneering AI infrastructure. Building on this momentum, NVIDIA released the A100 Tensor Core GPU in 2020, based on the Ampere architecture and optimized for AI training and inference with features like multi-instance GPU (MIG) for workload partitioning and up to 80 GB of HBM2e memory. The A100 delivered significant speedups over predecessors, such as up to 2x performance in sparse AI models, making it a cornerstone for large-scale data center deployments in hyperscale environments. These advancements were supported by strategic partnerships, including collaborations with OpenAI for deploying DGX systems and with Google to integrate NVIDIA hardware into AI research and cloud services. Key milestones included CEO Jensen Huang's keynote at the 2017 GPU Technology Conference (GTC), where he proclaimed the arrival of the AI era and outlined NVIDIA's vision for supercomputing to power intelligent machines across industries. Huang emphasized AI's transformative potential, positioning NVIDIA's GPUs as essential for the next wave of computing innovation. By 2020, this strategic focus yielded a revenue surge in the data center segment, which exceeded gaming revenue for the first time, with data center sales reaching $1.75 billion compared to $1.65 billion from gaming in the second quarter. This shift reflected the growing enterprise demand for AI hardware, solidifying NVIDIA's leadership in data centers.

Products and Technologies

Graphics Processing Units

NVIDIA invented the graphics processing unit (GPU) in 1999 with the release of the GeForce 256, marketed as the world's first GPU, which integrated 3D graphics rendering functions onto a single chip. Unlike central processing units (CPUs), which are optimized for sequential processing with complex instruction handling and branching, GPUs excel in parallel processing by executing thousands of simpler tasks simultaneously across numerous cores, enabling efficient handling of graphics workloads and other compute-intensive applications. NVIDIA's GPU architectures have evolved through several generations, each introducing innovations in performance, efficiency, and functionality. The Fermi architecture, launched in 2010, marked a significant advancement in general-purpose computing on GPUs (GPGPU) with improved double-precision support and error-correcting code memory. This was followed by the Kepler architecture in 2012, which enhanced power efficiency and introduced dynamic parallelism for better workload management. The Pascal architecture, released in 2016, brought high-bandwidth memory (HBM2) integration and further improvements in floating-point operations per second (FLOPS), achieving 10.6 teraFLOPS in single-precision (FP32) and over 21 teraFLOPS in half-precision (FP16) for data center GPUs like the P100. Building on this, the Ampere architecture in 2020 incorporated multi-instance GPU capabilities and tensor cores for mixed-precision computing, with the A100 GPU featuring 54 billion transistors on an 826 mm² die. The Hopper architecture, introduced in 2022, advanced transformer engine technology and achieved notable power efficiency, delivering up to 70.1 gigaFLOPS per watt in industry benchmarks for certain workloads. 
A pivotal innovation occurred earlier with the Tesla architecture in 2006, which introduced a unified shader architecture that combined vertex, pixel, and other processing units into a single, programmable pipeline, enabling more flexible graphics and compute operations across 128 processing elements. In 2018, the RTX series debuted dedicated ray tracing hardware, including RT cores, to accelerate real-time ray tracing for photorealistic rendering by simulating light paths more efficiently than software-based methods. These hardware advancements are supported by NVIDIA's software ecosystems, such as CUDA, which facilitate broader GPU utilization.

AI and Machine Learning Hardware

NVIDIA introduced Tensor Cores with its Volta GPU architecture in 2017, marking a significant advancement in hardware optimized for mixed-precision AI computations. These specialized cores are designed to accelerate matrix multiply-accumulate operations essential for deep learning workloads, providing up to 8x higher throughput compared to single-precision math pipelines in previous generations. The performance of Tensor Cores can be quantified using the formula for tensor floating-point operations per second (TFLOPS):
TFLOPS = (cores × clock speed × operations per cycle) ÷ 10¹²
This innovation enabled more efficient training and inference for neural networks by supporting formats like FP16 and INT8, reducing computational overhead while maintaining accuracy.
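The formula above can be checked with a short calculation. The figures below are hypothetical, chosen only to illustrate the arithmetic; they are not official NVIDIA specifications.

```python
# Illustrative calculation of theoretical peak tensor throughput using
# the formula above. All input values are hypothetical examples.

def peak_tflops(cores: int, clock_hz: float, ops_per_cycle: int) -> float:
    """Theoretical peak = cores x clock x operations per core per cycle,
    scaled to teraFLOPS (1e12 FLOPs per second)."""
    return cores * clock_hz * ops_per_cycle / 1e12

# Example: 640 tensor cores at 1.5 GHz, each performing 128 floating-point
# operations per cycle (64 fused multiply-adds, counted as 2 FLOPs each).
print(round(peak_tflops(640, 1.5e9, 128), 1))  # 122.9
```

This shows how modest per-core throughput multiplies into large aggregate numbers once hundreds of cores run at gigahertz clocks.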
Building on this foundation, NVIDIA has developed subsequent AI hardware products tailored for demanding applications such as large language models (LLMs). The Blackwell architecture, announced in 2024, includes the B100 GPU, which delivers substantial improvements in inference performance for models like GPT-3 (175 billion parameters), offering up to 25x better cost and energy efficiency over prior generations. These accelerators feature enhanced Tensor Cores with support for even lower precisions, enabling scalable AI training and deployment in data centers.

NVIDIA's AI hardware has achieved dominance in the training segment, controlling approximately 80% of the AI accelerator market and powering a majority of the world's top supercomputers by 2023, with architectures like the A100 becoming prevalent in high-performance computing environments. This leadership is bolstered by seamless integration with AI frameworks through tools like TensorRT, which optimizes models from platforms such as TensorFlow and PyTorch for high-performance inference on NVIDIA GPUs, delivering up to 6x faster execution.

In terms of collaborations, NVIDIA has worked closely with companies like Tesla and Meta to supply AI chips for their custom applications, including prioritizing GPU shipments for Tesla's AI initiatives and supporting Meta's large-scale training efforts, though Tesla has also pursued its own chip designs to complement NVIDIA hardware.

Software and Platforms

NVIDIA's software ecosystem is centered on tools that enable developers to leverage its hardware for parallel computing, AI, and simulation tasks. A cornerstone of this ecosystem is the Compute Unified Device Architecture (CUDA), a parallel computing platform and programming model introduced by NVIDIA in November 2006. CUDA allows general-purpose GPU programming, transforming graphics processors into versatile computing engines beyond traditional rendering. Key features include its kernel execution model, in which developers write kernels (functions executed in parallel across thousands of GPU threads) to perform compute-intensive operations efficiently. By 2023, CUDA had attracted over four million developers worldwide, underscoring its widespread adoption in fields like AI and scientific computing.

Complementing CUDA are specialized libraries and platforms tailored for AI and edge applications. The CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library providing highly tuned primitives for deep neural networks, accelerating operations such as convolutions and activations essential to machine learning workflows. NVIDIA also offers the JetPack SDK, a comprehensive software stack for building AI-powered edge applications on Jetson devices, including tools for low-latency inference and scalable deployment in robotics and IoT scenarios. For enterprise AI systems, DGX OS serves as a customized Linux distribution optimized for running AI, machine learning, and analytics workloads on DGX SuperPOD platforms, ensuring stability and performance in data center environments.

NVIDIA's platforms extend into collaborative and simulation domains with Omniverse, launched in 2020 as a real-time 3D simulation and collaboration platform built on Universal Scene Description (USD). Omniverse enables remote teams to iterate on complex 3D designs, such as architectural models or animations, in a shared virtual space powered by RTX GPUs.
Additionally, CUDA integrates seamlessly with popular machine learning frameworks like PyTorch and TensorFlow through official NVIDIA containers and releases that support GPU acceleration. These integrations allow developers to offload computations to NVIDIA GPUs with minimal configuration, enhancing training and inference speeds for deep learning models.
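The kernel execution model described above can be sketched in plain Python as an analogy. This is illustrative only: real CUDA kernels are written in CUDA C/C++ (or via language bindings) and execute on GPU hardware, with threads running concurrently rather than in a loop.

```python
# A pure-Python analogy for CUDA's kernel execution model: one "kernel"
# function is invoked once per thread index across a 1-D grid. On a GPU
# these invocations run in parallel; here they are sequential.

def launch_kernel(kernel, grid_size, *args):
    """Apply `kernel` at every thread index, mimicking a 1-D grid launch."""
    for thread_idx in range(grid_size):
        kernel(thread_idx, *args)

def saxpy_kernel(i, a, x, y, out):
    """Each 'thread' computes one element: out[i] = a * x[i] + y[i]."""
    out[i] = a * x[i] + y[i]

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch_kernel(saxpy_kernel, len(x), 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

The key idea the analogy captures is that the same function body runs at every index, with the thread index selecting which data element each invocation touches.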

Corporate Structure and Operations

Leadership and Governance

NVIDIA's leadership is headed by co-founder Jensen Huang, who has served as president and chief executive officer since the company's inception in 1993. Huang's vision has centered on advancing parallel computing through innovations like the graphics processing unit (GPU), which NVIDIA invented in 1999 to enable accelerated computing for graphics, AI, and data centers. Key executives under Huang include Colette Kress, executive vice president and chief financial officer since 2013, who oversees financial strategy and investor relations.

The company's board of directors consisted of 13 members as of November 2024, a mix of independent directors and company insiders intended to ensure balanced oversight. Notable independent members include Mark Stevens, a venture capitalist and longtime board member who contributes expertise in technology investments and governance. The board maintains strong governance practices, including annual shareholder meetings to discuss corporate matters and elect directors, as demonstrated by the 2023 annual meeting.

A significant aspect of NVIDIA's governance is executive compensation tied to performance metrics, particularly for CEO Huang, whose package includes equity awards linked to company achievements in revenue growth and stock performance. In fiscal year 2025, Huang's total compensation reached approximately $49.9 million, comprising a base salary increased to $1.5 million and substantial equity grants aligned with long-term incentives. Succession planning has been a topic of discussion, with reports in 2022 highlighting the need for formal strategies amid Huang's long tenure, though NVIDIA has emphasized continuity in leadership to support global operations.

Global Presence and Facilities

NVIDIA Corporation is headquartered at 2788 San Tomas Expressway in Santa Clara, California, the central hub for its operations and executive leadership. The company maintains over 50 offices worldwide, including sales and regional offices across the Americas, Asia, and Europe, to support its international customer base and distribution networks. Key facilities extend to R&D and operational centers in Israel, bolstered by the 2020 acquisition of Mellanox Technologies, which significantly expanded NVIDIA's footprint there. NVIDIA also partners with Taiwan Semiconductor Manufacturing Company (TSMC) in Taiwan for the production of its graphics processing units and other semiconductors, leveraging TSMC's advanced fabrication capabilities, and operates facilities in China to support regional sales and development.

As of the end of fiscal year 2023, NVIDIA employed 26,196 people across 35 countries, reflecting its broad global operational scale. This workforce has a significant presence in Asia, including Taiwan and China, as well as sales offices throughout Europe to facilitate market expansion and customer support in those regions.

The Mellanox acquisition in April 2020, valued at $7 billion, enhanced NVIDIA's operations in Israel by integrating Mellanox's expertise in high-performance networking, and has since grown the local employee count to approximately 5,000, making Israel a key hub for data center technologies. However, NVIDIA's global supply chain faced vulnerabilities during the 2022 chip shortages, which exposed dependencies on international manufacturing partners like TSMC and led to production delays amid surging demand for GPUs.
In fiscal year 2023, NVIDIA's revenue showed strong regional diversification, with China accounting for 21% of total revenue, underscoring the importance of the Asia-Pacific market, which broadly contributed over 50% of the company's overall sales through key locations like Taiwan and Singapore. This geographic revenue distribution highlights NVIDIA's reliance on Asian manufacturing and sales channels while maintaining balanced growth across global regions.

Research and Development

NVIDIA Corporation invests significantly in research and development (R&D) to maintain its technological edge, with annual R&D expenses reaching $7.339 billion in fiscal year 2023, a 39.3% increase from the previous year. This spending underscores the company's commitment to innovation in core areas such as graphics processing and artificial intelligence.

NVIDIA Research, established in 2006 and led by Bill Dally since 2009, serves as the company's primary hub for advanced technological exploration, employing more than 400 researchers across 26 disciplines. Its focus areas include quantum computing, where NVIDIA integrates quantum processing units (QPUs) with AI supercomputers to accelerate scientific discovery, and robotics, supporting the development of AI models for physical systems through tools like the NVIDIA Cosmos world foundation models.

The company's R&D strategy emphasizes rapid iteration, exemplified by its shift toward annual GPU platform releases, such as the Rubin platform, announced in 2024 and slated for 2026, which enables quicker advancements in AI and computing hardware than traditional two-year cycles. NVIDIA fosters this agility through collaborations with academic institutions, including an affiliate membership in the Stanford Center for Image Systems Engineering (SCIEN) and joint projects on AI and graphics research with Stanford University. To support these efforts, NVIDIA recruits for roles in performance modeling, SoC simulation, and hardware emulation engineering.
Open positions have included hardware performance modeling and chip-design emulation roles in Tel Aviv and Yokneam, Israel, centered on cycle-accurate simulation of network switch ASICs for analyzing AI and HPC workloads, as well as GPU and SoC modeling roles for new college graduates in Santa Clara, California, and further positions in graphics performance architecture and verification. NVIDIA Research has also expanded from its founding focus into AI-centered efforts, building on earlier innovations like the GPU's role in deep learning. By 2023, NVIDIA had amassed a robust intellectual property portfolio, with over 17,000 patents granted globally, reflecting its leadership in GPU and AI technologies.

Financial Performance

Revenue Growth and Market Share

NVIDIA's revenue has grown exponentially over the past decade, rising from $4.28 billion in fiscal year 2013 to $60.92 billion in fiscal year 2024, a more than 14-fold increase driven primarily by demand in data centers and AI applications. This trajectory reflects the company's pivot from gaming-focused graphics to high-performance computing, with fiscal 2024 revenue surging 126% year-over-year to $60.9 billion. A key milestone came in the first quarter of fiscal 2025 (ending April 2024), when revenue jumped to $26 billion, up 262% from the prior year, underscoring the AI-driven boom.

The data center segment has been the primary engine of this expansion, growing from approximately 40% of total revenue in fiscal 2021 to over 78% in fiscal 2024, with $47.5 billion in sales representing a 217% increase from the previous year. This shift was fueled by the AI surge, which propelled NVIDIA's market capitalization past $1 trillion in May 2023 for the first time and past $2 trillion in February 2024, positioning it among the world's most valuable companies.

NVIDIA (NVDA) stock traded at approximately $185–$190 per share in February 2026, closing at $185.61 on February 2, 2026. Analysts maintained a strong Buy consensus (90% Buy ratings), with one-year price targets averaging around $260–$264 (median about $250), implying 30–40% upside, and several firms raised targets (e.g., to $275) on AI demand. Earnings forecasts showed robust growth: roughly $213 billion in fiscal 2026 revenue (up 63% year-over-year) and roughly $323 billion in fiscal 2027. In early February 2026, optimistic outlooks for NVDA's performance through 2026 cited sold-out GPU production capacity, the upcoming Rubin chip architecture, a potential resumption of sales in China with an estimated $60–80 billion revenue opportunity, a reasonable valuation at 24x fiscal 2027 earnings, and NVIDIA's strong historical track record in AI-driven growth.
NVIDIA announced a $2 billion investment in CoreWeave and planned significant investments in OpenAI, though some deals have faced delays amid economic and regulatory considerations. The adjusted closing price for NVDA stock on January 3, 2000, accounting for all subsequent stock splits, was $0.09.

In terms of market share, NVIDIA maintained dominance in discrete GPUs for gaming, holding 80–90% of the market in 2023, bolstered by its GeForce product line. In AI accelerators, the company commanded approximately 95% share by 2023, capturing nearly all major data center GPU workloads thanks to its CUDA ecosystem and Hopper architecture. Earlier, the 2017 cryptocurrency mining boom significantly boosted revenue, driving substantial growth in the gaming segment during fiscal 2018 as GPUs were repurposed for mining, though this led to volatility after the 2018 bust.
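The growth figures above can be cross-checked with a short calculation using the fiscal 2013 and fiscal 2024 revenue numbers quoted in this section.

```python
# Checking the quoted revenue-growth figures: fiscal 2013 revenue of
# $4.28B versus fiscal 2024 revenue of $60.92B, an 11-year span.

rev_fy2013 = 4.28    # revenue in $ billions
rev_fy2024 = 60.92
years = 11

multiple = rev_fy2024 / rev_fy2013             # total growth multiple
cagr = (rev_fy2024 / rev_fy2013) ** (1 / years) - 1  # compound annual rate

print(round(multiple, 1))    # 14.2 -- the "more than 14-fold" increase
print(round(cagr * 100, 1))  # 27.3 -- percent compound annual growth
```

The roughly 27% compound annual growth rate over the full decade contrasts with the triple-digit year-over-year jumps of the AI-boom quarters, showing how back-loaded the growth was.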

R&D Expenses and Efficiency

NVIDIA's research and development (R&D) expenses have risen in absolute terms through the 2020s while their ratio to revenue has fluctuated, reflecting economies of scale driven by the company's dominance in AI hardware: the ratio peaked in fiscal year 2023 and declined thereafter. In fiscal 2023, the company reported R&D expenditures of $7.3 billion against total revenue of $26.97 billion, an R&D ratio of approximately 27%; the figure moderated to around 14% in fiscal 2024 and about 10% in fiscal 2025 as revenue surged on AI demand. Post-2018, the ratio tracked NVIDIA's strategic pivot toward AI and data center solutions: approximately 20% in fiscal 2019, around 26% in fiscal 2020, and a peak of 27% in fiscal 2023 before falling below 10% by fiscal 2025 as explosive revenue growth outpaced expense increases. This efficiency stems from a high revenue base, particularly the $15.01 billion in data center revenue achieved in fiscal 2023, which amortizes fixed R&D costs in chip design and innovation across a broader scale.

Compared with competitors, NVIDIA's R&D ratio varied between 10% and 27% in the 2020s, lower on average than AMD's roughly 25%, a gap attributable to AMD's catch-up investments in GPU and AI technologies, while NVIDIA's established market leadership enables more efficient scaling. Intel's higher ratio, around 27%, reflects its diversified portfolio spanning CPUs, foundry operations, and broader semiconductor R&D, in contrast with NVIDIA's focused, AI-driven approach. In absolute terms, NVIDIA's R&D spending is nearly double AMD's and is exceeded only by Intel's, yet NVIDIA achieves superior efficiency per dollar through its AI revenue dominance.
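The R&D-intensity ratios above follow directly from the dollar figures. A quick check (note that the fiscal 2024 R&D figure of about $8.68 billion is an approximation from public filings, not a number stated in this section):

```python
# Reproducing the R&D-intensity ratios cited above. Revenue and R&D
# spend are in $ billions; the FY2024 R&D figure (~$8.68B) is an
# approximate figure from public filings, used here as an assumption.

figures = {
    "FY2023": {"revenue": 26.97, "rnd": 7.30},  # ratio cited as ~27%
    "FY2024": {"revenue": 60.92, "rnd": 8.68},  # ratio cited as ~14%
}

for year, f in figures.items():
    ratio = f["rnd"] / f["revenue"]
    print(year, f"{ratio:.0%}")
# FY2023 27%
# FY2024 14%
```

The halving of the ratio despite higher absolute spending illustrates the point in the text: revenue grew far faster than R&D expenses.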

Acquisitions and Investments

NVIDIA has pursued a strategy of growth through strategic acquisitions and investments, particularly to enhance its capabilities in graphics, networking, and AI technologies. One of its earliest significant acquisitions was that of 3dfx Interactive, a pioneering graphics chip company, in 2000. Under the agreement, NVIDIA paid $70 million in cash and 1 million shares of its common stock, valued at approximately $37.4 million at the time, for 3dfx's patents, brand names, and intellectual property related to 3D graphics technology. This move allowed NVIDIA to integrate key technologies and personnel from 3dfx, strengthening its position in the competitive graphics market during the early days of consumer GPUs.

In more recent years, NVIDIA has focused on expanding into data center and networking domains through larger deals. A landmark acquisition was Mellanox Technologies in 2020, completed for a total value of $7 billion, including $125 per share in cash for all outstanding common shares. The acquisition integrated Mellanox's high-performance networking solutions, such as InfiniBand and Ethernet technologies, into NVIDIA's portfolio, enabling end-to-end acceleration for AI and cloud computing infrastructures, and has contributed to significant growth in NVIDIA's networking revenue segment.

Complementing the Mellanox deal, NVIDIA acquired Cumulus Networks in 2020 to bolster its software-defined networking offerings. Cumulus, a startup specializing in Linux-based networking operating systems for data centers, was integrated to optimize NVIDIA's full networking stack from chips to software, enhancing its ability to deliver programmable and accelerated networking solutions for modern data centers.
NVIDIA also attempted a major expansion into mobile and embedded computing with its 2020 bid to acquire Arm Holdings, valued at $40 billion, consisting of $12 billion in cash and approximately $21.5 billion in NVIDIA stock, paid to SoftBank Group, Arm's parent. The deal faced intense regulatory scrutiny from the U.S. Federal Trade Commission (FTC), which sued to block it in 2021 over antitrust concerns regarding competition in chip design, and from the UK's Competition and Markets Authority (CMA), which launched an inquiry citing potential harm to innovation in the semiconductor sector. The merger was ultimately abandoned in February 2022, with the CMA cancelling its inquiry after NVIDIA and SoftBank mutually terminated the agreement.

Beyond acquisitions, NVIDIA has made targeted investments in emerging AI technologies and startups. In 2023, it invested $50 million in Recursion Pharmaceuticals, a biotech firm using AI for drug discovery, as part of a multi-year collaboration to develop foundation models accelerating pharmaceutical research. In 2025, NVIDIA dramatically expanded its investments in European technology companies, participating in 14 funding rounds totaling more than $5.5 billion.

Market Impact and Controversies

Dominance in AI Market

NVIDIA has established overwhelming dominance in the AI chip market, controlling approximately 98% of data center GPU revenue in 2023 through shipments of 3.76 million units and maintaining around 92% as of 2025. This leadership is bolstered by enabling technologies such as DGX Cloud, which delivers high-performance AI infrastructure optimized for hyperscalers and enterprise workloads, providing full-stack intelligence with hyperscale efficiency across clouds. NVIDIA's GPUs power critical AI applications, including the training of large language models like ChatGPT, where thousands of NVIDIA GPUs operate in parallel to handle the immense computational demands of such systems. This role in foundational AI developments has directly contributed to NVIDIA's stock valuation surge, driven by AI hype, culminating in a peak market capitalization of $3 trillion in June 2024.

A key factor in NVIDIA's AI ecosystem is the developer lock-in created by CUDA, its proprietary parallel computing platform encompassing compilers, runtime libraries, debugging tools, and domain-specific frameworks, which forms a formidable moat around its hardware. This ecosystem is further strengthened through strategic partnerships, such as long-standing collaborations with AWS since 2010 to deliver GPU-accelerated AI solutions and integrations like NVIDIA Run:ai on Microsoft Azure for streamlined AI infrastructure management. These alliances enable seamless deployment of NVIDIA technologies in major cloud services, reinforcing developer and enterprise adoption while expanding AI accessibility for hyperscalers and service providers.

Looking ahead, NVIDIA's AI revenue projections indicate robust growth, with data center revenue reaching a record $51.2 billion in the third quarter of fiscal 2026 and total revenue guidance of $65 billion for the fourth quarter, underscoring sustained demand for its AI solutions.
However, emerging challenges from open-source alternatives like RISC-V, a customizable instruction set architecture gaining traction in AI hardware (NVIDIA itself ported CUDA to it in 2025), could introduce competition by offering more flexible, royalty-free options that disrupt proprietary ecosystems. Despite these pressures, NVIDIA's entrenched position sets it up for continued leadership, potentially driving AI revenue well beyond current quarterly highs into 2026 and beyond.

Antitrust Scrutiny and Legal Disputes

NVIDIA has faced significant antitrust scrutiny, particularly regarding its proposed acquisition of Arm Holdings. In December 2021, the U.S. Federal Trade Commission (FTC) filed a lawsuit to block NVIDIA's $40 billion acquisition of the UK-based chip designer Arm, alleging that the deal would stifle competition in markets for semiconductors used in mobile devices, data centers, automotive applications, and gaming consoles. The investigation highlighted concerns that control over Arm's intellectual property could let NVIDIA disadvantage competitors relying on Arm's designs. The acquisition was ultimately terminated in February 2022 amid regulatory opposition from multiple jurisdictions, including the FTC's ongoing challenge.

More recently, in 2024, the U.S. Department of Justice (DOJ) initiated an antitrust probe into NVIDIA's dominance in the AI chip market, focusing on allegations of anticompetitive practices such as bundling AI chips with software and imposing restrictive supply terms on customers. The investigation, which escalated with subpoenas issued in September 2024, examines whether NVIDIA's market position, controlling over 80% of AI accelerators, has led to unfair pricing and exclusion of rivals. The probe underscores broader concerns about monopolistic behavior in the rapidly growing AI sector.

NVIDIA has also been involved in several patent and licensing disputes, including a high-profile case with Intel in the late 2000s over graphics processing technology.
In 2009, Intel sued NVIDIA, asserting that their chipset licensing agreement did not cover Intel's newer processors with integrated memory controllers, and NVIDIA countersued, accusing Intel of breaching the agreement. The dispute was resolved in 2011, when Intel agreed to pay NVIDIA $1.5 billion in licensing fees as part of a six-year cross-license agreement covering patents related to graphics and computing technologies.

Ethical concerns have arisen from the heavy use of NVIDIA's GPUs in cryptocurrency mining, prompting measures to restrict resale and mitigate market distortions. In 2018, amid a surge in mining demand that boosted GPU sales, NVIDIA implemented policies prohibiting partners from publicly promoting products for cryptocurrency mining and limiting resale of high-end GPUs to non-gaming buyers. This led to class-action lawsuits alleging that NVIDIA failed to adequately disclose the significant contribution of crypto-related revenues, estimated at over $1 billion from May 2017 to July 2018, to its gaming segment, misleading investors about the sustainability of growth. The U.S. Securities and Exchange Commission (SEC) fined NVIDIA $5.5 million in 2022 for these disclosure inadequacies, and the class-action suit was allowed to proceed after the U.S. Supreme Court dismissed NVIDIA's appeal in December 2024.

Additionally, U.S. export controls imposed in 2022 on advanced AI chips have raised ethical and compliance issues for NVIDIA, restricting sales of high-performance GPUs to China to prevent military applications. These controls, enacted by the U.S. Department of Commerce, required NVIDIA to obtain licenses for exporting chips capable of significant AI training, impacting its global supply chain and prompting the development of lower-performance variants like the A800 and H800 for the Chinese market. Further controls in October 2023 restricted the export of the H800 and similar chips as well.

Environmental and Social Responsibility

NVIDIA has committed to achieving net-zero carbon emissions as a company by 2040, aligning its operations and supply chain with global climate goals to limit temperature rise to 1.5 degrees Celsius. As part of this effort, the company aims to source 100% renewable electricity for its global operations and data centers by the end of fiscal year 2025, having reached 44% renewable sourcing in fiscal year 2023 through green tariffs and energy attribute certificates. NVIDIA's annual ESG reports, such as the Fiscal Year 2023 Corporate Responsibility Report and the Fiscal Year 2025 Sustainability Report, detail progress on these initiatives, including supply chain audits in which over 60% of key silicon and systems manufacturing suppliers reported using renewable energy in fiscal year 2023.

On the social responsibility front, the NVIDIA Foundation supports STEM education through grants and programs, donating nearly $27 million in fiscal year 2025 to initiatives including AI literacy and hands-on learning for underrepresented communities. The foundation's efforts extend to global cancer research and mentoring programs, emphasizing equitable access to technology education. Regarding workforce diversity, NVIDIA reported that women comprised 19.2% of its global employees as of the end of fiscal year 2023, with similar representation among new hires at 19.9%, and efforts underway to increase female participation in leadership roles, where women held 11.4% of positions. These statistics reflect ongoing inclusion programs, including mentoring and job-shadowing opportunities tailored for diverse groups.

Despite these advancements, NVIDIA faces challenges from the high energy demands of AI data centers, which contribute to data centers' overall electricity consumption, estimated at about 1.5% of global totals in recent assessments.
The company's accelerated computing solutions, such as GPUs, are designed to improve energy efficiency—up to 20 times more efficient than CPUs for certain AI workloads—helping mitigate this impact across its global operations.
