How Bulk Data Centers is keeping its cool with soaring AI data demands https://techhq.com/2024/03/how-bulk-data-centers-is-keeping-its-cool-with-soaring-ai-data-demands/ Mon, 25 Mar 2024 14:06:19 +0000


Research by the IDC indicates that the growing use of AI technology will require storage capacity in data centers to reach 21.0 zettabytes by 2027. More storage necessitates more energy, and, as a result, data centers are trying to manage growing customer workloads while pre-empting future technological advancements that will further increase infrastructure requirements.

Source: Bulk Data Centers

In addition, climate change will continue to impact businesses and communities worldwide, partly due to rising energy demands. Therefore, it is imperative that the environmental impact of the AI surge is addressed, for example, by implementing state-of-the-art cooling technology and optimizing data center site locations.

Bulk Data Centers, a builder and operator of Nordic data centers, is an example of a company taking steps to address growing energy demands sustainably. TechHQ spoke with Rob Elder, the company’s Chief Commercial Officer (CCO), to find out more about the innovative strategies and technologies implemented to achieve this goal.

Increasing rack density over horizontal expansion

In mid-2023, Bulk Data Centers invested heavily in USystems’ CL20 Active Rear Door Heat Exchangers to support new, power-dense technologies needed to meet customer demands. These include GPU-based hardware, which requires up to 50 kW of power per rack.

Mr Elder told TechHQ: “Customers were asking for density that was beyond the capabilities of traditional systems, plus they wanted more flexibility to ramp up density when needed. This was driven by the ever-growing power of the GPUs and CPUs that customers use.”

Increasing rack density is a more environmentally friendly way of accommodating larger workloads than horizontal expansion, because heat exchangers work more efficiently at the higher return temperatures that dense racks produce.

Source: Bulk Data Centers

Mr Elder said: “Some operators try to spread out high-density workloads because they don’t have the cooling systems to accommodate it, but the problem with that is you use more materials to distribute pipework and cables over a longer distance. You also need more real estate because you need a bigger building.

“So actually, by increasing the density, you benefit from smaller buildings, and you densify the infrastructure. All of that further reduces the impact of operations.”
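Mr Elder's point can be made concrete with a back-of-the-envelope sketch. The figures below (10 kW versus 50 kW per rack, 2.5 m² of white space per rack) are illustrative assumptions, not Bulk's numbers:

```python
# Back-of-the-envelope: housing the same 1 MW IT load at low vs. high
# rack density. All figures are illustrative assumptions.

IT_LOAD_KW = 1000        # total IT load to accommodate
AREA_PER_RACK_M2 = 2.5   # assumed footprint per rack, including aisle share

def racks_needed(kw_per_rack):
    # Ceiling division: a partially filled rack still occupies a footprint.
    return -(-IT_LOAD_KW // kw_per_rack)

low_density = racks_needed(10)    # traditional air-cooled density
high_density = racks_needed(50)   # GPU-class density with rear-door cooling

print(low_density, high_density)                 # 100 racks vs. 20 racks
print(low_density * AREA_PER_RACK_M2,            # 250.0 m^2 of white space
      high_density * AREA_PER_RACK_M2)           # vs. 50.0 m^2
```

Five times the density means a fifth of the racks and floor area to build, cable, and pipe, which is the materials saving Mr Elder describes.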

Choosing a site to lower cooling requirements

Bulk Data Centers puts a particular focus on sustainability when selecting sites for expansion. Its facilities are in the Nordics, which benefit from year-round low temperatures and provide natural cooling to racks. “You can increase the number of hours that you’re not running mechanical cooling, which reduces the reliance on electricity,” said Mr Elder.

Electricity in these countries carries a smaller environmental footprint compared to others due to their abundant renewable energy resources. For instance, in Norway, where Bulk has several data center facilities, electricity is sourced from renewable energy.

Mr Elder added: “We’re also assessing supply chain components for environmental impact via procurement, including equipment suppliers and using materials like green steel in construction.”

Employing energy-efficient rear door coolers

While Norway is a naturally cool place, it is not enough to rely entirely on direct air cooling, and Mr Elder said that Bulk Data Centers has “always recognized the limitations of conventional cooling.”

USystems’ CL20 Active Rear Door Heat Exchanger units reclaim up to 15 percent of power for computing compared with traditional cooling, and save the carbon equivalent of over 50,000 trees per 1 MW deployed. With adaptive intelligence controlling room temperatures, the system minimizes the risks associated with high-density computing and provides predictable air management, aligning seamlessly with the energy-efficient ethos and low-carbon hydropower at Bulk’s N01 campus in Kristiansand, Southern Norway.

“We could offer a peak density beyond what the average was because each door has its own capacity, capturing energy directly from the back of each rack,” said Mr Elder. “Our first deployment with a customer gave an average of 40 kW per rack, but a peak of up to 60 kW.”

Source: Bulk Data Centers

Boosting the peak density allowed Bulk Data Centers to be more flexible in accommodating customers’ power demands. “Having a standard design when we don’t know exactly what our customers are going to be deploying means we’ve got that flexibility and can still meet that short timeframe they require,” he said. “That’s why we’ve continued to develop our relationship with USystems because flexibility and speed are important to customers.”

What’s next for data center operators?

Looking to the future, the CCO emphasized how important it is for data center operators to stay on their toes when it comes to adaptation and innovation. “We’re at the beginning of what seems to be a dramatic shift.”

“Designs will keep evolving, reaching for higher density and necessitating a blend of water-cooled, direct-to-chip, and air-cooled systems.”

Despite growth, the environmental impact must also be considered. “With the massive IT loads that we’re witnessing and the escalating power demands, there’s a growing need for awareness regarding environmental impact.”

As power demands rise and environmental considerations become increasingly crucial, Legrand’s solutions stand ready to future-proof data center infrastructure while minimizing its carbon footprint. Discover how the customizable solutions, endorsed by industry leaders and equipped with adaptive intelligence, could revolutionize your operations by visiting the Legrand website today.

The post How Bulk Data Centers is keeping its cool with soaring AI data demands appeared first on TechHQ.

Hugging Face Safetensors vulnerable to supply chain attacks https://techhq.com/2024/03/hugging-face-safetensors-vulnerable-to-supply-chain-attacks/ Thu, 07 Mar 2024 09:30:55 +0000


• Hugging Face vulnerabilities revealed.
• Supply chain attacks can get into Hugging Face safetensors.
• That means the whole Hugging Face community could be under threat.

Recent research has found that the new Hugging Face Safetensors conversion services are vulnerable to supply chain attacks, with hackers able to hijack AI models submitted by users. Reported by The Hacker News, cybersecurity researchers from HiddenLayer discovered that it is “possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform.” The researchers also found that it was possible to “hijack any models that are submitted through the conversion service.”

For those who don’t know, Hugging Face is a collaboration platform where software developers host and work together on datasets, machine learning models, and applications, many of them pre-trained. Users can build on, deploy, and fine-tune these as they choose.

Vulnerabilities in Hugging Face

Safetensors is a format designed by Hugging Face to store tensors while prioritizing security; users can convert PyTorch models to Safetensors through a pull request if desired. It stands in contrast to “pickle,” another serialization format, which has been exploited by malicious actors to run unauthorized code and deploy tools such as Mythic and Cobalt Strike.
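To see why pickle-based model files are dangerous in a way Safetensors is not, here is a minimal, self-contained Python sketch. The payload is hypothetical and merely records that it ran rather than executing a real command: unpickling invokes whatever callable the file's `__reduce__` hook names, whereas a Safetensors file is just a JSON header plus raw tensor bytes, with no code path at all.

```python
import pickle

executed = []  # observable side effect standing in for "attacker code ran"

def attacker_action(msg):
    # Stand-in for arbitrary attacker code (a real exploit might call
    # os.system or fetch a second-stage payload here).
    executed.append(msg)
    return msg

class MaliciousModel:
    # pickle calls __reduce__ when serializing; the tuple it returns tells
    # the *loader* to call attacker_action("pwned") at deserialization time.
    def __reduce__(self):
        return (attacker_action, ("pwned",))

blob = pickle.dumps(MaliciousModel())
obj = pickle.loads(blob)  # merely loading the bytes runs the payload
# Safetensors avoids this class of attack by design: its files contain only
# a JSON header and raw tensor data, so loading them cannot execute code.
```

After `pickle.loads`, `executed` contains the attacker's message: simply opening the file was enough to run the payload.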

The recent revelation of these vulnerabilities comes as a shock to many of Hugging Face’s 1.2 million registered users. The research showed that malicious pull requests could be generated via a hijacked model: because the service converts whatever model it is given, attackers can pose as the conversion bot and request changes to any repository on the platform.

It is also possible for hackers to extract the token associated with SFConvertbot, the bot that generates conversion pull requests, and use it to send a malicious pull request to any repository on the Hugging Face site. From there, a threat actor could manipulate the model, even implanting neural backdoors.

According to researchers, “an attacker could run any arbitrary code any time someone attempted to convert their model.” Essentially, a model could be hijacked upon conversion without the user even knowing it.

An attack could result in the theft of a user’s Hugging Face token if they try to convert their personal repository. Hackers may also be able to access datasets and internal models, resulting in malicious interference.

The complexities of these vulnerabilities don’t stop there. An adversary could exploit the ability for any user to submit a conversion request for a public repository, resulting in a possible modification or hijacking of a widely utilized model. That poses a substantial risk to the overall supply chain. Researchers summed this up by saying, “the conversion service has proven to be vulnerable and has had the potential to cause a widespread supply chain attack via the Hugging Face official service.”

Attackers could get access to a container that runs the service, and choose to compromise any models that have been converted by it.

Hugging Face – traditionally, bad things happen afterwards…

The implications go beyond individual repositories: the overall trustworthiness and reliability of the Hugging Face service and its community are under threat.

Co-founder and CEO of HiddenLayer, Chris “Tito” Sestito, emphasized the effects this vulnerability could have on a wider scale, saying, “This vulnerability extends beyond any single company hosting a model. The compromise of the conversion service has the potential to rapidly affect the millions of users who rely on these models to kick-start their AI projects, creating a full supply chain issue. Users of the Hugging Face platform place trust not only in the models hosted there but also in the reputable companies behind them, such as Google and Microsoft, making them all the more susceptible to this type of attack.”

LeftoverLocals

HiddenLayer’s disclosure comes just one month after Trail of Bits revealed a vulnerability known as LeftoverLocals (CVE-2023-4969, Common Vulnerability Scoring System (CVSS) score: 6.5). This security flaw enables the retrieval of data from general-purpose graphics processing units (GPGPUs) manufactured by Apple, AMD, Qualcomm, and Imagination. The CVSS score of 6.5 indicates moderate severity, but the flaw still puts sensitive data at risk.

The memory leak Trail of Bits uncovered stemmed from a failure to isolate process memory: a local attacker could read memory left behind by other processes, including the interactive sessions of other users of a large language model (LLM).

The Hugging Face vulnerabilities, like those revealed by Trail of Bits, only emphasize the need for stricter security protocols around AI technologies. Currently, AI adoption is growing faster than security measures can keep up. HiddenLayer is one company creating solutions for such shortcomings, with its AISec platform offering a range of products designed to protect ML models against malicious code injection and attacks.

Nevertheless, the revelation of Hugging Face’s Safetensors conversion tool issues gives us a stark reminder of the challenges faced by AI and machine learning sectors. Supply chain attacks could put the integrity of AI models at risk, as well as the ecosystems that rely on such technologies. Right now, investigations are continuing into the vulnerability, with the machine learning community on high alert, and more vigilant than ever before.

The post Hugging Face Safetensors vulnerable to supply chain attacks appeared first on TechHQ.

US aims for chip supremacy: From zero to 20% by 2030 https://techhq.com/2024/02/us-aims-for-chip-supremacy-from-zero-to-20-by-2030/ Wed, 28 Feb 2024 15:30:09 +0000

  • The US wants to regain its leadership within the chip industry, and Commerce Sec. Raimondo targets 20% domestic production of leading-edge chips by 2030.
  • The US currently produces none; hence, the ambitious goal is set for the end of this decade.
  • Biden admin aims to bring memory chip production to the US “at scale.”

As the global demand for semiconductors surges, the US has embarked on a bold mission to revitalize its chip manufacturing industry. Last February, the Commerce Department launched the CHIPS for America program, echoing the ambitious spirit of the space race era. While US companies lead in AI development, the absence of domestic chip production poses a critical challenge. However, with a strategic focus on talent development, R&D, and manufacturing, the US aims to fill this gap and produce 20% of the world’s leading-edge chips by 2030. 

Commerce Secretary Gina Raimondo remains optimistic about the program’s potential to transform America’s industrial landscape. The US aims to fortify its supply chains and reduce reliance on geopolitical rivals by investing in leading-edge logic chip manufacturing and onshoring memory production. “Our investments in leading-edge logic chip manufacturing will put this country on track to produce roughly 20% of the world’s leading-edge logic chips by the end of the decade,” Raimondo said during a speech at the Center for Strategic and International Studies (CSIS) on February 26, 2024.

“That’s a big deal,” Raimondo added. “Why is that a big deal? Because folks, today we’re at zero.” Her speech came a year after the initiation of funding applications under the 2022 CHIPS and Science Act by the US Department of Commerce. With a staggering US$39 billion earmarked for manufacturing incentives, the stage has been set for a transformative journey in the semiconductor landscape. 

US Commerce Secretary Gina Raimondo speaks during the UK Artificial Intelligence (AI) Safety Summit at Bletchley Park, in central England, on November 1, 2023. (Photo by TOBY MELVILLE/POOL/AFP).

Raimondo’s ambitious vision, unveiled concurrently, delineates the path ahead. By 2030, the US aims to spearhead the design and manufacture of cutting-edge chips, establishing dedicated fabrication plant clusters to realize this audacious objective. She claims that, besides everything else, there has been a significant shift in the need for advanced semiconductor chips due to AI. 

“When we started this, generative AI wasn’t even part of our vocabulary. Now, it’s everywhere. Training a single large language model takes tens of thousands of leading-edge semiconductor chips. The truth is that AI will be the defining technology of our generation. You can’t lead in AI if you don’t lead in making leading-edge chips. And so our work in implementing the CHIPS Act became much more important,” Raimondo emphasized.

If the US achieves its goals, it will result in “hundreds of thousands of good-paying jobs,” Raimondo said Monday. “The truth of it is the US does lead, right? We do lead. We lead in the design of chips and the development of large AI language models. But we don’t manufacture or package any leading-edge chips that we need to fuel AI and our innovation ecosystem, including chips necessary for national defense. We don’t make it in America, and the brutal fact is the US cannot lead the world as a technology and innovation leader on such a shaky foundation,” she reiterated.

Why did the US fall behind in chip manufacturing?

The US grappled with a significant gap in chip manufacturing for several reasons. Firstly, many semiconductor companies outsourced their manufacturing operations overseas to cut costs, leading to a decline in domestic chip production capacity. Secondly, as semiconductor technology advanced, the complexity and cost of building cutting-edge fabrication facilities increased, discouraging investment in new fabs. 

Meanwhile, global competitors like Taiwan, South Korea, and China expanded their semiconductor industries rapidly, intensifying competition. While other countries provided substantial government support to their semiconductor industries, the US fell behind. Regulatory hurdles and environmental regulations also made building and operating semiconductor fabs in the US challenging and costly.

A combination of outsourcing, technological challenges, global competition, lack of government support, and regulatory issues contributed to the US’s gap in chip manufacturing, with none of the world’s leading-edge chips being produced domestically.

And then the world woke up one morning in dire need of leading-edge chips to underscore the technology behind the next industrial revolution, and America realized its mistake.

“We need to make these chips in America. We need more talent development in America. We need more research and development in America and just a lot more manufacturing at scale,” Raimondo said in her speech at CSIS.

2030 vision: prioritizing future-ready projects

US President Joe Biden greets attendees after delivering remarks on his economic plan at a TSMC chip manufacturing facility in Phoenix, Arizona, on December 6, 2022. (Photo by Brendan SMIALOWSKI/AFP).

In Raimondo’s speech, she declared that the US will first prioritize projects that will be operational by the end of this decade. “I want to be clear: there are many worthy proposals that we’ve received with plans to come online after 2030, and we’re saying no, for now, to those projects because we want to maximize our impact in this decade,” she clarified.

In short, the US will give way to “excellent projects that could come online this year” instead of granting incentives to projects that will come online 10 or 12 years from now. She also referred back to the goal set last year: when the CHIPS initiative is complete, the US should have at least two new large-scale clusters of leading-edge logic fabs, each employing thousands of workers.

“I’m pleased to tell you today we expect to exceed that target,” she claimed. So far, the Commerce Department has awarded grants to three companies in the chip industry as part of the CHIPS Act: BAE Systems, Microchip Technology, and, most recently, a significant US$1.5 billion grant to GlobalFoundries. Additional funding is anticipated for Taiwan Semiconductor Manufacturing Co. and Samsung Electronics as they establish new facilities within the US.

Raimondo also highlighted her nation’s commitment to supporting the production of older-generation chips, referred to as mature-node or legacy chips. “We’re not losing sight of the importance of current generation and mature node chips, which you all know are essential for cars, medical devices, defense systems, and critical infrastructure.”

Yet the lion’s share of the investments, totaling US$28 billion of the US$39 billion, is earmarked for leading-edge chips. Raimondo emphasized that the program aims for targeted investments rather than funds scattered thinly. She disclosed that the department has received over US$70 billion in requests from leading-edge companies alone.

For now, anticipation is high for the Commerce Department’s new round of grant announcements, scheduled to coincide with President Joe Biden’s State of the Union address on March 7. Among the expected recipients is TSMC, which is establishing new Arizona facilities.

The post US aims for chip supremacy: From zero to 20% by 2030 appeared first on TechHQ.

Samsung seizes 2nm AI chip deal, challenging TSMC’s reign https://techhq.com/2024/02/samsung-seizes-2nm-ai-chip-deal-challenging-tsmc/ Tue, 20 Feb 2024 09:30:46 +0000

  • The inaugural deal for 2nm chips marks a significant milestone for Samsung, signaling a challenge to TSMC and its dominance.
  • The deal could significantly change the power balance in the industry.
  • Samsung has a strategy to offer lower prices for its 2nm process, reflecting its aggressive approach to attracting customers, particularly eyeing Qualcomm’s flagship chip orders.

In the race for technological supremacy and market dominance, Taiwan Semiconductor Manufacturing Company (TSMC) and Samsung Electronics lead the charge in semiconductor manufacturing. As demand for advanced chips surges in the 5G, AI, and IoT era, competition intensifies, driving innovation. Both companies vie to achieve smaller nanometer nodes, which are pivotal for technological advancement. 

When it comes to semiconductor innovation, TSMC spearheads the charge, with ambitious plans for 3nm and 2nm chips, promising a leap in performance and efficiency. Meanwhile, Samsung, renowned for its memory chip prowess, is mounting a determined challenge to TSMC’s supremacy. Recent reports suggest that Samsung is on the brink of unveiling its 2nm chip technology, marking a significant milestone in its bid to rival TSMC.

In a notable turn of events disclosed during Samsung’s Q4 2023 financial report, the tech world buzzed with news of Samsung’s foundry division securing a prized contract for 2nm AI chips. Amid speculation, Samsung maintained secrecy about the identity of this crucial partner.

But earlier this week, a revelation from Business Korea unveiled that the patron happens to be Japanese AI startup Preferred Networks Inc. (PFN). Since its launch in 2014, PFN has emerged as a powerhouse in AI deep learning, drawing substantial investments from industry giants like Toyota, NTT, and FANUC, a leading Japanese robotics firm.

Samsung vs TSMC

Samsung, headquartered in Suwon, South Korea, is set to unleash its cutting-edge 2nm chip processing technology to craft AI accelerators and other advanced AI chips for PFN, as confirmed by industry insiders on February 16, 2024. 

If news of this landmark deal is legitimate, the arrangement would prove mutually advantageous: it would give PFN access to state-of-the-art chip innovations for a competitive edge while propelling Samsung forward in its fierce foundry-market rivalry with TSMC, according to insider reports.

Ironically, PFN has had a longstanding partnership with TSMC dating back to 2016, but is opting to shift gears from here on out, going with Samsung’s 2nm node for its upcoming AI chip lineup, according to a knowledgeable insider. PFN also chose Samsung over TSMC due to Samsung’s full-service chip manufacturing capabilities, covering everything from chip design to production and advanced packaging, sources revealed.

Experts also speculate that although TSMC boasts a more extensive clientele for 2nm chips, PFN’s strategic move to Samsung hints at a potential shift in the Korean giant’s favor. This pivotal decision may pave the way for other significant clients to align with Samsung, altering the competitive landscape in the chipmaking realm.

No doubt, in the cutthroat world of contract chipmaking, TSMC reigns supreme, clinching major deals with industry giants like Apple Inc. and Qualcomm Inc. But, as the demand for top-tier chips escalates, the race for technological superiority heats up, with TSMC and Samsung at the forefront of the battle. While TSMC currently leads the pack, boasting 2nm chips for clients like Apple and Nvidia, Samsung is hot on its heels. 

“Apple is set to become TSMC’s inaugural customer for the 2nm process, positioning TSMC at the forefront of competition in the advanced process technology,” TrendForce said in its report. Meanwhile, according to Samsung’s previous roadmap, its 2nm SF2 process is set to debut in 2025. 

The Samsung Foundry Forum (SFF) plan could challenge TSMC.

“As stated in Samsung’s Foundry Forum (SFF) plan, Samsung will begin mass production of the 2nm process (SF2) in 2025 for mobile applications, expand to high-performance computing (HPC) applications in 2026, and further extend to the automotive sector and the expected 1.4nm process by 2027,” TrendForce noted.

Compared with Samsung’s second-generation 3nm process (3GAP), SF2 offers a 25% improvement in power efficiency at the same frequency and complexity, a 12% performance boost at the same power consumption and complexity, and a 5% reduction in chip area. In short, with TSMC eyeing mass production of 2nm chips by 2025, the competition between these tech titans is set to reach new heights.

Yet, in a strategic maneuver reported by the Financial Times, Samsung is gearing up to entice customers with discounted rates for its 2nm process, a move poised to shake up the semiconductor landscape. With its sights set on Qualcomm’s flagship chip production, Samsung aims to lure clients away from TSMC by offering competitive pricing. 

This bold initiative signals Samsung’s determination to carve out a larger market share and challenge TSMC’s dominance in the semiconductor industry.

The post Samsung seizes 2nm AI chip deal, challenging TSMC’s reign appeared first on TechHQ.

Meta is gearing up to join the AI chips race https://techhq.com/2024/02/the-ai-chips-race-is-about-to-get-intense-with-metas-artemis/ Tue, 06 Feb 2024 09:30:44 +0000

  • Ultimately, Meta wants to break free from Nvidia’s AI chips while challenging other tech giants making their silicon.
  • Meta expects an additional US$9 billion on AI expenditure this year, beyond the US$30 billion annual investment.
  • Will Artemis mark a decisive break from Nvidia, now that Meta is hoarding H100 chips?

A whirlwind of generative AI innovation in the past year alone has exposed major tech companies’ profound reliance on Nvidia. Crafting chatbots and other AI products has become an intricate dance with specialized chips largely made by Nvidia in the preceding years. Pouring billions of dollars into Nvidia’s systems, the tech behemoths have found themselves straining against the chipmaker’s inability to keep pace with the soaring demand. Faced with this problem, industry titans like Amazon, Google, Meta, and Microsoft are trying to seize control of their fate by forging their own AI chips. 

After all, in-house chips would let the giants steer their own destiny: slashing costs, eradicating chip shortages, and opening a future in which they offer these cutting-edge chips to businesses tethered to their cloud services, creating their own silicon fiefdoms rather than being entirely dependent on the likes of Nvidia (and potentially AMD and Intel).

The most recent tech giant to announce plans to go solo is Meta, which is rumored to be developing a new AI chip, “Artemis,” set for release later this year. 

The chip, designed to complement the extensive array of Nvidia H100 chips recently acquired by Meta, aligns with the company’s strategic focus on inference—the crucial decision-making facet of AI. While bearing similarities to the previously announced MTIA chip, which surfaced last year, Artemis seems to emphasize inference over training AI models. 

The H100 Tensor Core GPU. Source: Nvidia.

However, it is worth noting that Meta is entering the AI chip arena at a point when competition has gained momentum. It started with a significant move last July, when Meta disrupted the competition for advanced AI by unveiling Llama 2, a model akin to the one driving ChatGPT.

Then, last month, Zuckerberg introduced his vision for artificial general intelligence (AGI) in an Instagram Reels video. In the previous earnings call, Zuckerberg also emphasized Meta’s substantial investment in AI, declaring it as the primary focus for 2024. 

2024: the year of custom AI chips by Meta?

In its quest to empower generative AI products across platforms like Facebook, Instagram, WhatsApp, and hardware devices like Ray-Ban smart glasses, the world’s largest social media company is racing to enhance its computing capacity. Therefore, Meta is investing billions to build specialized chip arsenals and adapt data centers. 

Last Thursday, Reuters got hold of an internal company document that states that the parent company of Facebook intends to roll out an updated version of its custom chip into its data centers this year. The latest iteration of the custom chip, codenamed ‘Artemis,’ is designed to bolster the company’s AI initiatives and might lessen its dependence on Nvidia chips, which presently hold a dominant position in the market. 

Mark Zuckerberg, CEO of Meta, testifies before the Senate Judiciary Committee at the Dirksen Senate Office Building on January 31, 2024 in Washington, DC. (Photo by Anna Moneymaker/GETTY IMAGES NORTH AMERICA/Getty Images via AFP).

If successfully deployed at Meta’s massive scale, an in-house semiconductor could trim annual energy costs by hundreds of millions of dollars, and slash billions in chip procurement expenses, suggests Dylan Patel, founder of silicon research group SemiAnalysis. The deployment of Meta’s chip would also mark a positive shift for its in-house AI silicon project. 

In 2022, executives abandoned the initial chip version, choosing instead to invest billions in Nvidia’s GPUs, dominant in AI training. The upside of that strategy is that Meta is poised to accumulate many coveted semiconductors. Mark Zuckerberg revealed to The Verge that by the close of 2024, the tech giant will possess over 340,000 Nvidia H100 GPUs – the primary chips used by entities for training and deploying AI models like ChatGPT. 

Additionally, Zuckerberg anticipates Meta’s collection to reach 600,000 GPUs by the year’s end, encompassing Nvidia’s A100s and other AI chips. Like its predecessor, Meta’s new chip is designed for inference: running trained models to make ranking judgments and respond to user prompts. Last year, Reuters reported that Meta is also working on a more ambitious chip that, like GPUs, could perform both training and inference.

Zuckerberg also detailed Meta’s strategy to vie with Alphabet and Microsoft in the high-stakes AI race. Meta aims to capitalize on its extensive walled garden of data, highlighting the abundance of publicly shared images and videos on its platform and distinguishing it from competitors relying on web-crawled data. Beyond the existing generative AI, Zuckerberg envisions achieving “general intelligence,” aspiring to develop top-tier AI products, including a world-class assistant for enhanced productivity.

The post Meta is gearing up to join the AI chips race appeared first on TechHQ.

]]>
AI giant wave predictor – a force for good https://techhq.com/2024/01/ai-giant-wave-predictor-a-force-for-good/ Wed, 31 Jan 2024 15:24:48 +0000 https://techhq.com/?p=231762

The post AI giant wave predictor – a force for good appeared first on TechHQ.

]]>

AI may be threatening to wipe out jobs like a ten-pin bowler chasing a perfect score, but there are applications that even union leaders would agree are a force for good. And one of those is the ability of deep learning models to steer container ships away from the perils of giant waves.

If you’ve ever seen a shipping container up close, it’s hard to imagine how such a thing – weighing over two tons empty and capable of carrying a maximum payload of more than 28 tons – could tumble from its bay into the ocean. However, 2,301 containers were lost each year on average (from 2020 to 2022), according to World Shipping Council records [PDF].

One of the reasons for these losses is structural failure, but the biggest culprits are so-called rogue waves arising from natural ocean phenomena. What’s more, shipping captains can have little warning that conditions are about to become dangerous.

While sailing across the North Atlantic in 1995, the master of the Queen Elizabeth 2 reportedly said that an almost 100 ft high wave ‘came out of nowhere and looked like the White Cliffs of Dover’.

Freakishly large waves form when wave systems cross each other and combine through linear superposition. “If two wave systems meet at sea in a way that increases the chance to generate high crests followed by deep troughs, the risk of extremely large waves arises,” explains Dion Häfner – a Senior Research Engineer at Pasteur Labs in New York, US.
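The crossing-sea effect can be illustrated with a toy calculation: when two wave trains overlap, their surface elevations simply add, so aligned crests can momentarily stack into a much larger crest than either system produces alone. A minimal sketch (the amplitudes and frequencies below are invented for illustration, not taken from real buoy data):

```python
import numpy as np

# Two idealized wave trains crossing at a fixed point (amplitudes in meters).
t = np.linspace(0, 120, 2400)          # two minutes of sea surface elevation
wave_a = 4.0 * np.sin(0.55 * t)        # swell from one system
wave_b = 3.5 * np.sin(0.48 * t + 0.2)  # wind sea from another direction

combined = wave_a + wave_b             # linear superposition

# Individually, neither train exceeds its own amplitude, but where the
# crests drift into alignment the combined elevation approaches their
# sum (up to 7.5 m here) - far taller than either system alone.
print(f"max elevation A: {wave_a.max():.2f} m")
print(f"max elevation B: {wave_b.max():.2f} m")
print(f"max combined:    {combined.max():.2f} m")
```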

AI as a force for good

Häfner is first author of a study titled ‘Machine-Guided Discovery of a Real-World Rogue Wave Model’, submitted to arXiv in November 2023 and published in PNAS. Together with colleagues at the Institute for Simulation Intelligence – which is staffed by ‘industry-hardened experts in AI and computational sciences’ from organizations such as DeepMind, Cerebras, CERN, NASA, and other centers of excellence – he wondered whether AI could be used as a force for good to enable safer shipping.

At any given time, there can be as many as 50,000 cargo vessels navigating the waters around our planet, and the goods they carry are essential to global supply chains. The team’s goal was to provide shipping operators, and anyone with an interest in conditions out at sea, with an accurate prediction of the likelihood of giant waves.


Underpinning the group’s work, is a framework known as a causal directed acyclic graph, which takes a series of environmental conditions and relates them to sea state parameters. In turn, those sea state parameters give rise to physical effects, which can be converted into observations.

“The probability to measure a rogue wave based on the sea state can be modelled as a sum of nonlinear functions, each of which only depends on a subset of the sea state parameters representing a different causal path,” write the researchers in their paper.

Neural networks are able to model these nonlinear functions beautifully. And the group was able to feed its many-layered architecture with training data to adjust all of the various parameters. There’s a wealth of measurements and historical records that can be used to teach the model what inputs are likely to result in dangerous conditions.
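The structure the researchers describe – a probability expressed as a sum of nonlinear functions, each seeing only a subset of sea state parameters – can be sketched in a few lines. This is an illustrative reconstruction in plain numpy, not the authors’ code: the parameter groupings, network sizes, and (untrained) random weights are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden=8):
    """One small randomly initialized network f_i mapping R^n_in -> R."""
    w1 = rng.normal(size=(n_in, n_hidden))
    w2 = rng.normal(size=(n_hidden, 1))
    return lambda x: np.tanh(x @ w1) @ w2

# Hypothetical sea state parameters (columns): e.g. significant wave height,
# steepness, directional spread, crest-trough correlation.
causal_groups = [(0, 1), (2,), (1, 3)]   # each f_i sees one causal path
nets = [make_mlp(len(g)) for g in causal_groups]

def rogue_probability(sea_state):
    """p(rogue | sea state) = sigmoid( sum_i f_i(subset_i) )."""
    total = sum(net(sea_state[:, list(g)]) for g, net in zip(causal_groups, nets))
    return 1.0 / (1.0 + np.exp(-total))

sea_states = rng.normal(size=(5, 4))     # five synthetic observations
print(rogue_probability(sea_states).ravel())
```

Because each network only sees its own causal path, the fitted functions stay interpretable – which is what let the researchers extract a human-readable rogue wave model from the trained system.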

The team made use of its Free Ocean Wave Dataset, which had been prepared previously for exactly this task and involved processing a buoy data catalog containing information on 4 billion waves. Of those, around 100,000 fall into the category of being potentially deadly rogue waves, which translates to around one per day occurring somewhere out at sea.
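In oceanography, a rogue wave is commonly defined as one whose crest-to-trough height exceeds twice the significant wave height, i.e. the mean of the highest one-third of waves in a record. A hedged sketch of that filter over a record of individual wave heights (the synthetic record below is illustrative only):

```python
import numpy as np

def significant_wave_height(heights):
    """Hs: mean of the highest one-third of wave heights in the record."""
    h = np.sort(heights)[::-1]
    top_third = h[: max(1, len(h) // 3)]
    return top_third.mean()

def find_rogue_waves(heights, factor=2.0):
    """Flag waves exceeding `factor` times the significant wave height."""
    hs = significant_wave_height(heights)
    return np.flatnonzero(heights > factor * hs)

# Synthetic record: ordinary 2-4 m waves plus one 9 m outlier.
rng = np.random.default_rng(1)
record = rng.uniform(2.0, 4.0, size=300)
record[137] = 9.0
print(find_rogue_waves(record))  # → [137]
```

Applied across billions of buoy measurements, a criterion like this is what separates the roughly 100,000 rogue candidates from the rest of the catalog.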

AI’s ability to spot patterns and encode the underlying causality into hidden layers that form deep neural networks becomes a force for good when directed at problems such as the prediction of supply chain disruption.

“As shipping companies plan their routes well in advance, they can use our algorithm to get a risk assessment of whether there is a chance of encountering dangerous rogue waves along the way. Based on this, they can choose alternative routes,” Häfner points out.

The post AI giant wave predictor – a force for good appeared first on TechHQ.

]]>
Biden weighs blocking China’s access to US cloud tech, fearing AI advancement https://techhq.com/2024/01/us-cloud-control-biden-eyes-blocking-china-ai-access/ Tue, 30 Jan 2024 15:00:58 +0000 https://techhq.com/?p=231735

The post Biden weighs blocking China’s access to US cloud tech, fearing AI advancement appeared first on TechHQ.

]]>
  • Raimondo warns against unwanted access for China to US cloud technology to build AI.
  • The Secretary of Commerce is acting to block use of US tech for AI by China due to “security concerns.”
  • The move, impacting players like Amazon and Microsoft, is anticipated to escalate tech tensions with China.

The long-standing rivalry between the US and China has taken many forms over the last decade. The intensifying competition underscores economic supremacy and national security concerns, shaping the dynamics of a burgeoning tech war. Last year, the battleground extended into the development of AI; this year, the US has signaled its intent to control access to its domestic cloud computing services. 

Recent proposals suggest stringent measures to curb China’s access to US cloud computing firms, fueled by concerns over the potential exploitation of American technology for AI advancement. In a recent interview, US Secretary of Commerce Gina Raimondo emphasized the need to prevent non-state actors and China from utilizing American cloud infrastructure to train their AI models.

“We’re beginning the process of requiring US cloud companies to tell us every time a non-US entity uses their cloud to train a large language model,” Raimondo said at an event on January 27. Raimondo, however, did not name any countries or firms about which she was particularly concerned. Still, the maneuver is anticipated to intensify the technological trade war between the US and China, and signify a notable step toward the politicization of cloud provision.

The focal point of this battle lies in recognizing that controlling access to cloud computing is equivalent to safeguarding national interests. Raimondo draws a parallel with the control exerted through export restrictions on chips, which are integral to American cloud data centers. As the US strives to maintain technological supremacy, closing avenues for potential malicious activity becomes imperative.

Therefore, the proposal explicitly mandates that firms like Amazon and Google gather, store, and scrutinize customer data, under obligations resembling the stringent “know-your-customer” regulations that shape the financial sector. Meanwhile, China has been aggressively pursuing AI development, seeking to establish itself as a global leader in the field. 

The US concerns stem from the dual-use nature of AI technologies, which can have both civilian and military applications. The fear is that China’s advancements in AI could potentially be leveraged for strategic military purposes, posing a direct challenge to US national security.

Of AI, cloud computing, and the US-China tech war

China’s Premier Li Qiang (R) speaks with US Commerce Secretary Gina Raimondo during their meeting at the Great Hall of the People in Beijing on August 29, 2023. (Photo by Andy Wong/POOL/AFP).

Although the US broadened chip controls in October, focusing on Chinese firms in 40+ nations, a gap remains. That is why it is paramount for the US to address how Chinese companies can still leverage chip capabilities through the cloud. Cloud technology has become the backbone of modern businesses and governments, making it a critical asset in the ongoing tech war. 

From start to finish, cloud computing is inherently political, Trey Herr, director of cyber statecraft at the Atlantic Council, told Raconteur. He said that its reliance on extensive physical infrastructure tied to specific jurisdictions makes it susceptible to local politics, adding that conversations about cloud security inevitably take on political dimensions.

In October 2023, Biden directed the US Department of Commerce to mandate disclosures, aiming to uncover foreign actors deploying AI for cyber-mischief. Now, the Commerce Department, building on stringent semiconductor restrictions for China, is exploring the idea of regulating the cloud through export controls. Raimondo said the concern is that Chinese firms could gain computing power via cloud giants like Amazon, Microsoft, and Google.

“We want to make sure we shut down every avenue that the Chinese could have to get access to our models or to train their models,” she said in an interview with Bloomberg last month. In short, China’s strides in AI and cutting-edge technologies are a paramount worry for the administration. After all, despite Washington’s efforts to curtail China’s progress through chip export restrictions and sanctions on Chinese firms, the nation’s tech giants resiliently achieve substantial breakthroughs, challenging the effectiveness of US constraints.

Nevertheless, how such activities can be regulated in the US remains unsettled, because cloud services, which do not involve the transfer of physical goods, fall outside export control domains. Thea Kendler, assistant secretary for export administration, mentioned the potential need for additional authority in this space during discussions with lawmakers last month.

Addressing further loopholes, the Commerce Department also plans to conduct surveys on companies developing large language models for their safety tests, as mentioned by Raimondo on Friday. However, specific details about the survey requests were not disclosed.

What are cloud players saying?

As with previous export controls, US cloud providers fear that limitations on their interactions with international customers, lacking reciprocal measures from allied nations, may put American firms at a disadvantage. However, Raimondo said that comments on the proposed rule are welcome until April 29 as the US seeks input before finalizing the regulation.

What is certain is that the cloud will persist as an arena for trade war extensions and geopolitical maneuvers. Nevertheless, this tech war has broader implications for the global tech ecosystem. It prompts questions about data sovereignty, privacy, and the geopolitical alignment of technological alliances. As the US seeks to tighten its grip on the flow of technology, China is compelled to find alternative routes to sustain its AI ambitions.

The outcome will shape the future trajectory of technological innovation, with ramifications extending far beyond cloud computing and AI development. 

The post Biden weighs blocking China’s access to US cloud tech, fearing AI advancement appeared first on TechHQ.

]]>
Tech job layoffs fuel AI recruitment drive https://techhq.com/2024/01/ai-jobs-replace-other-tech-workers/ Tue, 30 Jan 2024 12:30:03 +0000 https://techhq.com/?p=231726

The post Tech job layoffs fuel AI recruitment drive appeared first on TechHQ.

]]>
  • AI jobs replace other tech specialisms.
  • Layoffs part of restructured workforces.
  • Big tech goes all-in on LLMs and chatbots.

Technology and tech-first companies around the world are continuing to lay off staff. According to Layoffs.fyi, 93 companies have let nearly 25,000 employees go in the first month of 2024.

Big-name technology companies cutting numbers include SAP (8,000), Salesforce (700), Paytm (1,000), and Spotify (1,500), among hundreds of others. CNBC has reported that big tech is restructuring its workforces to further its designs on the AI market, using the exercise to lay off workers from less profitable ventures.

Despite many thousands of layoffs in tech, AI and ML are massive growth sectors in the technology industry at present as companies race to capitalize on the public’s seemingly undimmed enthusiasm for smart chatbots, refined search, and automated data mining.

“Public Domain: WPA: Depression-Era Unemployed, 1935 by Unknown (NARA)” by pingnews.com is marked with Public Domain Mark 1.0.

Household names Apple and Nvidia both have multiple live advertisements for suitably qualified machine learning experts, with good candidates offered salaries well into six figures. For recent graduates with even a smattering of data science in their undergraduate portfolio, it’s a great time to enter the job market. For more seasoned professionals well-versed in Python, R, data science, and industry-standard tools like TensorFlow, it’s a seller’s market.

Perhaps the move most indicative of the current state of play in the technology space is Meta’s restructuring. Here, AI jobs replace other roles in what’s becoming an oft-repeated trope. The company has cut its Metaverse function in terms of funding and people, and instead is plowing saved resources into AI-focused projects. Layoffs in specifically targeted areas like AR/VR come alongside active recruitment of new AI specialists. This suggests a hurry to get qualified personnel on board and up and running quickly: if the pace of progress were slower, it might have deployed resources to cross-train staff already on its books.

Unfortunately for those confronted with cardboard boxes containing the contents of their desks, the company is pursuing the quicker burn-down-and-start-again approach to restructuring.

Pruning in other areas

Google has reduced its headcount in areas like Fitbit and Google Assistant in a series of cuts that began last year, following the AI-before-all-else mindset that’s gripped the industry. According to a leaked memo seen by The Verge, Google CEO Sundar Pichai said that Alphabet, Google’s parent company, was “removing layers to simplify execution and drive velocity in some areas.” Layer removal, in this case, amounted to 12,000 redundancies, and velocity can be translated from corporate-speak into English as ‘profits.’

Scaremongers and reactionaries in all parts of the media are fond of stating that ‘AI will take our jobs.’ Tangentially, at least, they are partly correct; AI is taking jobs but taking them in areas of technology companies not directly related to AI.

Sorry, your layer has been removed to create velocity. Have a great day, y’all… Source: fosstodon.org

Meanwhile, the practical use of AI in organizations tends to be centered around creating greater efficiency and faster throughput of work rather than replacing human labor. Software developers using AI, for example, can typically reduce the amount of time they spend researching knotty coding problems at sites like StackOverflow and find answers from LLMs faster, either through querying chatbots for better web search results or directly using plugin tools like Copilot or CodeWhisperer.

AI jobs replace game developers

In the computer game sector, Microsoft and Riot Games are ridding themselves of 8% and 11% of their gaming staff respectively, with Microsoft’s move a result of consolidation in the wake of its $69bn purchase of Activision Blizzard.

As the games industry contracts its outlays, Amazon’s Twitch service, beloved by gamers and de facto host of live gaming streams, is to lose 500 staff. Elsewhere in the Amazon empire, Audible, the audiobook subscription service, is also set to lose 5% of its workforce. Audible CEO Bob Carrigan said the company faces an “increasingly challenging landscape.”

The post Tech job layoffs fuel AI recruitment drive appeared first on TechHQ.

]]>
OpenAI’s in-house chip odyssey: Sam Altman aims for a network of fabs https://techhq.com/2024/01/openais-in-house-chip-odyssey-sam-altman-aims-for-a-network-of-fabs/ Wed, 24 Jan 2024 15:00:45 +0000 https://techhq.com/?p=231393

The post OpenAI’s in-house chip odyssey: Sam Altman aims for a network of fabs appeared first on TechHQ.

]]>
  • Sam Altman, CEO of OpenAI, has been wooing investors like G42 and SoftBank for chip fab capital.
  • His urgency stems from the expected chip supply shortage by the decade’s end.
  • Insiders have revealed that the planned network aims to partner with top-tier chip manufacturers and will have a worldwide reach.

In a thought-provoking revelation during The Wall Street Journal’s Tech Live event in October 2023, Sam Altman said he would “never rule out” that OpenAI could end up crafting its own AI chips. While acknowledging that OpenAI is not currently developing proprietary chips, Altman hinted that realizing the grand vision of attaining general AI might necessitate the company venturing into chip creation. Many saw it as a dynamic stance, underlining OpenAI’s adaptability and commitment to pushing boundaries in the ever-evolving landscape of AI. Additionally, if OpenAI were to, for instance, own the patents on the chips that made general AI possible, it would essentially own the future of the world.

However, Altman has long emphasized the importance of developing specialized hardware to meet the unique demands of AI. Project Tigris emerges from this vision: to craft a dedicated chip tailored to optimize the processing requirements of OpenAI’s advanced AI models. 

What is the significance of in-house chips for OpenAI?

Developing an in-house chip promises to significantly improve the performance and efficiency of OpenAI’s AI models. By customizing hardware to align with the specific needs of advanced machine learning algorithms, OpenAI aims to push the boundaries of what AI can achieve, potentially unlocking new possibilities in fields ranging from natural language processing to computer vision.

At the heart of Altman’s intention is to keep OpenAI from being thrown off course by the seemingly simple obstacle of microchip shortages. The scarcity of these vital components, crucial for the advancement of AI, has already become a colossal headache for Altman and numerous tech executives striving to replicate OpenAI’s triumphs.

Altman has repeatedly emphasized that the existing chip supply cannot meet OpenAI’s insatiable requirements. 

But Altman’s endeavors faced a temporary hiatus when he was briefly removed as OpenAI CEO in November 2023. However, soon after his return, the project was reignited. Altman has even explored the possibility with Microsoft, and sources reveal the software giant’s keen interest in the venture.

What has Sam Altman planned for OpenAI now?

The latest development is that Altman has discreetly initiated conversations with potential investors, aiming to secure substantial funds not just for AI chips but for creating whole chip-fabrication plants, affectionately known as fabs. Veiled in anonymity, sources disclosed that among the companies engaged in these discussions was G42 from Abu Dhabi – a revelation by Bloomberg last month – and the influential SoftBank Group.

“The startup has discussed raising between US$8 billion and US$10 billion from G42,” said one of Bloomberg‘s anonymous sources on the story. “It’s unclear whether the chip venture and wider company funding efforts are related,” the report reads. Unbeknown to many, this fab project entails collaboration with top chip manufacturers to use the expertise of established industry players, ensuring that Project Tigris benefits from the latest advancements in semiconductor technology.

While Bloomberg previously hinted at fundraising efforts for the chip venture, the exact scale and manufacturing focus have yet to be unveiled. Still in their early stages, these talks have not yet finalized the list of participating partners and backers, adding a layer of intrigue to this evolving narrative. 

Is OpenAI’s venture into building its chip fabs a viable endeavor?

Altman courting Korean expertise and money? Source: X.com.

Ultimately, Altman advocates for urgent industry action to ensure an ample chip supply by the end of the decade. However, his approach, emphasizing the construction and maintenance of fabs, diverges from the cost-effective strategy favored by many AI industry peers, including Amazon, Google, and Microsoft—OpenAI’s primary investor. 

These tech giants typically design custom silicon and outsource manufacturing to external suppliers. The construction of a cutting-edge fab involves a significant financial investment, often reaching tens of billions of dollars, and establishing a network of such facilities spans several years. A single chip factory’s cost can range from US$10 billion to US$20 billion, influenced by factors such as location and planned capacity. 

For instance, Intel’s Arizona fabs are estimated at US$15 billion each, and TSMC’s nearby factory project is projected to reach around US$40 billion. Moreover, these facilities may require four to five years for completion, with potential delays due to current workforce shortages. Some argue that OpenAI seems more inclined to support leading-edge chip manufacturers like TSMC, Samsung Electronics, and potentially Intel rather than enter the foundry industry. 

In an article in The Register, it’s suggested that the strategy could involve channeling raised funds into these fabrication giants, such as TSMC, where Nvidia, AMD, and Intel’s GPUs and AI accelerators are manufactured. TSMC stands out as a prime candidate, given its role in producing components for significant players in the AI industry. 

“If he gets it done—by raising money from the Middle East or SoftBank or whoever—that will represent a tech project that may be more ambitious (or foolhardy) than OpenAI itself,” Cory Weinberg said in his briefing for The Information.

While the ambition behind Project Tigris is commendable, inherent challenges and risks are associated with developing custom hardware. The intricacies of semiconductor design, production scalability, and compatibility with existing infrastructure pose formidable hurdles that OpenAI will need to overcome to realize the full potential of its in-house chip.

The post OpenAI’s in-house chip odyssey: Sam Altman aims for a network of fabs appeared first on TechHQ.

]]>
Google’s first data center in the UK: a billion-dollar tech investment https://techhq.com/2024/01/google-billion-dollar-uk-data-center-unveiled/ Mon, 22 Jan 2024 15:00:00 +0000 https://techhq.com/?p=231319

The post Google’s first data center in the UK: a billion-dollar tech investment appeared first on TechHQ.

]]>
  • The data center will be the first to be operated by Google in the UK.
  • Google’s 2022 deal with ENGIE adds 100MW wind energy.
  • The aim is for 90% carbon-free UK operations by 2025.

In the ever-evolving landscape of cloud computing, Google Cloud is a formidable player, shaping the global data center market with its leading solutions and heavyweight presence. Google Cloud’s commitment to expanding its global footprint is exemplified by its recent announcement of a US$1 billion investment in a new data center in Waltham Cross, Hertfordshire, UK. 

The move not only underscores the company’s dedication to meeting the needs of its European customer base, but also aligns with the UK government’s vision of fostering technological leadership on the global stage. One of the critical pillars of Google Cloud’s presence in the UK is its substantial investment in cutting-edge data infrastructure; notably, the upcoming data center will be Google’s first in the country.

Illustration of Google's new UK data Centre in Waltham Cross, Hertfordshire. The 33-acre site will create construction and technical jobs for the local community. Source: Google

“As more individuals embrace the opportunities of the digital economy and AI-driven technologies enhance productivity, creativity, health, and scientific advancements, investing in the necessary technical infrastructure becomes crucial,” Debbie Weinstein, VP of Google and managing director of Google UK & Ireland, said in a statement last week.

In short, this investment will provide vital computing capacity, supporting AI innovation and ensuring dependable digital services for Google Cloud customers and users in the UK and beyond.

Google already operates data centers in various European locations, including the Netherlands, Denmark, Finland, Belgium, and Ireland, where its European headquarters are situated. The company already has a workforce of over 7,000 people in Britain.

Google Cloud’s impact extends far beyond physical infrastructure, though. The company’s cloud services have become integral to businesses across various sectors in the UK. From startups to enterprises, organizations are using Google Cloud’s scalable and flexible solutions to drive efficiency, enhance collaboration, and accelerate innovation.

The comprehensive nature of Google Cloud’s offerings, including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS), ensures that it caters to the diverse needs of the UK’s business landscape.

That said, the investment in Google’s Waltham Cross data center is part of the company’s ongoing commitment to the UK. It follows other significant investments, such as the US$1 billion acquisition of its Central Saint Giles office in 2022, a development in King’s Cross, and the launch of the Accessibility Discovery Centre, fostering accessible tech across the UK.

“Looking beyond our office spaces, we’re connecting nations through projects like the Grace Hopper subsea cable, linking the UK with the United States and Spain,” Weinstein noted.

“In 2021, we expanded the Google Digital Garage training program with a new AI-focused curriculum, ensuring more Brits can harness the opportunities presented by this transformative technology,” Weinstein concluded. 

Google is investing US$1 billion in a new UK data center to meet rising service demand, supporting Prime Minister Rishi Sunak's tech leadership ambitions. Source: Google.

Google is investing US$1 billion in a new UK data center to meet rising service demand, supporting Prime Minister Rishi Sunak’s tech leadership ambitions. Source: Google.

24/7 Carbon-free energy by 2030

Google Cloud’s commitment to sustainability also aligns seamlessly with the UK’s environmental goals. The company has been at the forefront of implementing green practices in its data centers, emphasizing energy efficiency and carbon neutrality. “As a pioneer in computing infrastructure, Google’s data centers are some of the most efficient in the world. We’ve set out our ambitious goal to run all of our data centers and campuses on carbon-free energy (CFE), every hour of every day by 2030,” it said.

This aligns with the UK’s ambitious targets to reduce carbon emissions, creating a synergy beyond technological innovation. Google forged a partnership with ENGIE for offshore wind energy from the Moray West wind farm in Scotland, adding 100 MW to the grid and propelling its UK operations towards 90% carbon-free energy by 2025. 
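Hourly carbon-free energy goals of this kind are typically scored by matching, hour by hour, carbon-free supply against consumption, with each hour capped at 100%, and then averaging across hours. A simplified sketch of that accounting (the figures are synthetic, and the real methodology also handles grid mix and energy certificates):

```python
# Simplified hourly carbon-free energy (CFE) accounting: each hour,
# carbon-free supply only counts up to that hour's consumption.
# All numbers below are synthetic, for illustration only.
hourly_load_mwh = [120, 110, 130, 150, 160, 140]   # data center demand
hourly_cfe_mwh = [100, 140, 110, 150, 90, 160]     # wind/solar supply

matched = sum(min(load, cfe) for load, cfe in zip(hourly_load_mwh, hourly_cfe_mwh))
total = sum(hourly_load_mwh)
cfe_score = matched / total
print(f"CFE score: {cfe_score:.1%}")
```

Note that surplus wind in one hour (140 MWh against 110 MWh of demand) cannot offset a shortfall in another, which is why hourly matching is a much stricter target than annual net accounting.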

Beyond that, the tech giant said it is delving into groundbreaking solutions, exploring the potential of harnessing data center heat for off-site recovery and benefiting local communities by sharing warmth with nearby homes and businesses.

The post Google’s first data center in the UK: a billion-dollar tech investment appeared first on TechHQ.

]]>