Artificial Intelligence - TechHQ: Technology and business

How Bulk Data Centers is keeping its cool with soaring AI data demands
https://techhq.com/2024/03/how-bulk-data-centers-is-keeping-its-cool-with-soaring-ai-data-demands/ (Mon, 25 Mar 2024)

Research by analyst firm IDC indicates that the growing use of AI technology will require storage capacity in data centers to reach 21.0 zettabytes by 2027. More storage necessitates more energy, so data centers are trying to manage growing customer workloads while pre-empting future technological advancements that will further increase infrastructure requirements.

[Image source: Bulk Data Centers]

In addition, climate change will continue to impact businesses and communities worldwide, partly due to rising energy demands. Therefore, it is imperative that the environmental impact of the AI surge is addressed, for example, by implementing state-of-the-art cooling technology and optimizing data center site locations.

Bulk Data Centers, a builder and operator of Nordic data centers, is an example of a company taking steps to address growing energy demands sustainably. TechHQ spoke with Rob Elder, the company’s Chief Commercial Officer (CCO), to find out more about the innovative strategies and technologies implemented to achieve this goal.

Increasing rack density over horizontal expansion

In mid-2023, Bulk Data Centers invested heavily in USystems’ CL20 Active Rear Door Heat Exchangers to support new, power-dense technologies needed to meet customer demands. These include GPU-based hardware, which requires up to 50 kW of power per rack.

Mr Elder told TechHQ: “Customers were asking for density that was beyond the capabilities of traditional systems, plus they wanted more flexibility to ramp up density when needed. This was driven by the ever-growing power of the GPUs and CPUs that customers use.”

Increasing rack density is a more environmentally friendly way of accommodating larger workloads than horizontal expansion, partly because heat captured close to the rack, at higher return temperatures, can be removed more efficiently.

[Image source: Bulk Data Centers]

Mr Elder said: “Some operators try to spread out high-density workloads because they don’t have the cooling systems to accommodate it, but the problem with that is you use more materials to distribute pipework and cables over a longer distance. You also need more real estate because you need a bigger building.

“So actually, by increasing the density, you benefit from smaller buildings, and you densify the infrastructure. All of that further reduces the impact of operations.”
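
To put rough numbers on the densification argument, here is a back-of-the-envelope sketch in Python. The load and footprint figures are illustrative assumptions, not Bulk Data Centers' data, but they show how the building shrinks for the same IT load as rack density rises:

```python
# Back-of-the-envelope comparison of spread-out vs. high-density deployments.
# All figures are illustrative assumptions, not Bulk Data Centers' numbers.

IT_LOAD_KW = 1000        # total customer IT load to accommodate
RACK_FOOTPRINT_M2 = 3.0  # assumed floor area per rack, including aisle space

for label, kw_per_rack in [("spread out", 10), ("high density", 50)]:
    racks = IT_LOAD_KW / kw_per_rack
    floor_area = racks * RACK_FOOTPRINT_M2
    print(f"{label:>12}: {racks:5.0f} racks, ~{floor_area:,.0f} m2 of white space")

# -> spread out: 100 racks, ~300 m2; high density: 20 racks, ~60 m2
```

Less floor area also means shorter pipework and cable runs, which is the materials saving Mr Elder describes.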

Choosing a site to lower cooling requirements

Bulk Data Centers puts a particular focus on sustainability when selecting sites for expansion. Its facilities are in the Nordics, where year-round low temperatures provide natural cooling to racks. “You can increase the number of hours that you’re not running mechanical cooling, which reduces the reliance on electricity,” said Mr Elder.

Electricity in these countries carries a smaller environmental footprint than elsewhere thanks to abundant renewable energy resources. For instance, in Norway, where Bulk has several data center facilities, electricity is sourced almost entirely from renewables, chiefly hydropower.

Mr Elder added: “We’re also assessing supply chain components for environmental impact via procurement, including equipment suppliers and using materials like green steel in construction.”

Employing energy-efficient rear door coolers

While Norway is a naturally cool place, it is not enough to rely entirely on direct air cooling, and Mr Elder said that Bulk Data Centers has “always recognized the limitations of conventional cooling.”

USystems’ CL20 Active Rear Door Heat Exchanger units reclaim 15 percent of computing power compared to traditional cooling, and the company says they save over 50,000 trees’ worth of carbon per 1 MW deployment. With adaptive intelligence controlling Bulk Data Centers’ room temperatures, the system minimized the risks associated with high-density computing and provided predictable air management, aligning seamlessly with Norway’s energy-efficient ethos and the low-carbon hydropower available at Bulk’s N01 campus in Kristiansand, southern Norway.

“We could offer a peak density beyond what the average was because each door has its own capacity, capturing energy directly from the back of each rack,” said Mr Elder. “Our first deployment with a customer gave an average of 40 kW per rack, but a peak of up to 60 kW.”

[Image source: Bulk Data Centers]

Boosting the peak density allowed Bulk Data Centers to be more flexible in accommodating customers’ power demands. “Having a standard design when we don’t know exactly what our customers are going to be deploying means we’ve got that flexibility and can still meet that short timeframe they require,” he said. “That’s why we’ve continued to develop our relationship with USystems because flexibility and speed are important to customers.”

What’s next for data center operators?

Looking to the future, the CCO emphasized how important it is for data center operators to stay on their toes when it comes to adaptation and innovation. “We’re at the beginning of what seems to be a dramatic shift.”

“Designs will keep evolving, reaching for higher density and necessitating a blend of water-cooled, direct-to-chip, and air-cooled systems.”

Despite growth, the environmental impact must also be considered. “With the massive IT loads that we’re witnessing and the escalating power demands, there’s a growing need for awareness regarding environmental impact.”

As power demands rise and environmental considerations become increasingly crucial, solutions from Legrand – the parent company of USystems – are designed to future-proof data center infrastructure while minimizing its carbon footprint. Discover how its customizable solutions, endorsed by industry leaders and equipped with adaptive intelligence, could revolutionize your operations by visiting the Legrand website.

Tiny VLMs bring AI text plus image vision to the edge
https://techhq.com/2024/03/tiny-vlms-bring-ai-text-plus-image-vision-to-the-edge/ (Thu, 07 Mar 2024)

Large language models capable of providing statistically likely answers to written text prompts are transforming knowledge work. AI algorithms enable tasks to be performed faster and reduce the need for specialist skills, which can be unnerving to witness for highly trained and experienced staff. But there’s magic to how neural networks can level the playing field and help users to understand their world. And a great example of this is the rise of vision language models (VLMs), which co-locate image features alongside multi-dimensional word embeddings.

So-called multimodal AI – neural networks capable of generating results based on multiple input types such as text and images – adds to the number of ways that deep learning can be used. OpenAI’s GPT-4 with Vision will answer questions about images – for example, if the user submits a photo or web link. Given a picture of the contents of a fridge, it can suggest ideas for dinner.

You can picture VLMs as image recognition on steroids. Multimodal AI models can do much more than just recognize features in a scene. Algorithms can gather insights from security footage, prompt maintenance crews to take action, analyse customer behaviour, and much more besides (as our test results show – scroll down to see the responses).

VLMs can provide a second pair of eyes that never get tired. However, these superpowers come at a cost and historically have required heavy lifting to happen behind the scenes in the cloud. Generative AI algorithms have gorged themselves on the contents of the internet in the name of fine-tuning their model weights.

But not all of those parameters are created equal, and developers are finding ways to trim AI models down to a more memory-efficient size for specific tasks. It means that algorithms can run locally at the edge and suit fully remote use cases such as drone inspections – for example, where connectivity cannot be guaranteed.

One of a new breed of tiny VLMs capable of running at the edge is Moondream, which is advertised as being able to run anywhere, even on mobile devices and Raspberry Pis. The code to run the tiny AI text and vision model is available on GitHub, and there’s even a live demo environment so that you can jump straight in and put the VLM to the test.
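
For readers who want to try the model outside the hosted demo, the Moondream2 model card on Hugging Face documents a short transformers-based workflow along these lines. Treat this as a sketch: the encode_image and answer_question helpers are loaded via trust_remote_code and may change between model revisions, and the image path is our placeholder:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

model_id = "vikhyatk/moondream2"

# Moondream2 ships its own image-encoding and question-answering helpers,
# which is why trust_remote_code is required here.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

image = Image.open("harbour.jpg")  # placeholder path to a local photo
enc_image = model.encode_image(image)
print(model.answer_question(enc_image, "Is it safe to walk straight ahead?", tokenizer))
```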

Tiny VLM test results

To discover what VLMs can do and highlight potential use cases for local AI models capable of answering questions about images, TechHQ submitted a variety of photos to the Moondream2 live demo hosted on Hugging Face.

Image 1

Harbour scene. Photo: JT.

Input: Is it safe to walk straight ahead?

Response: No, it is not safe to walk straight ahead. There is a dock with a red life ring attached to it, and a few boats are visible in the water. You need to be cautious and look out for any obstacles or hazards such as the dock, boats, or other watercraft, before walking forward.

Image 2

Stairway. Photo: JT.

Input: Is there something for me to hold on to?

Response: Yes, there is a railing on the stairs, and you can hold on to it while climbing to the top. The railing is attached to the wall, providing support for those ascending or descending the steps.

Input: Can you see an open window?

Response: Yes, there is an open window on the building in the background, and the sun is shining brightly illuminating the scene.

Image 3

Pedestrian crossing. Photo: JT.

Input: Is it safe for me to use the pedestrian crossing?

Response: No, it is not safe for you to use the pedestrian crossing, as the image shows a black car approaching the crossing. The car is likely not paying attention to the red light, which indicates that it’s not safe for pedestrians to cross at that moment.

Image 4

Road signs. Photo: JT.

Input: Should I go left or right to get to the airport?

Response: Right.

Input: Which way to the marina?

Response: Left.

Image 5

Bus stop. Photo: JT.

Input: Is there anywhere to catch a bus?

Response: Yes, there is a bus stop in the image, with benches for passengers to sit on and a covered area. A traffic cone is also present, and a bridge can be seen in the background.

Input: Where can I shelter if it rains?

Response: You can shelter at the bus stop if it rains, as there is a bus shelter nearby with benches and a roof providing protection from the elements.

Image 6

Luxury yacht. Photo: JT.

Input: What can you see?

Response: A helicopter is positioned on a ship, with a metal fence in the foreground. The background features a variety of elements, including buildings, trees, a road, a water body, and the sky.

Image 7

New Cut Coffee shop in Bristol, UK. Photo: JT.

Input: Is there anywhere to get a warm drink around here?

Response: Yes, there is a coffee shop nearby. The image shows a red building with the words “New Cut Coffee” on it, and a black fence. A black car is parked in the lot, and a white building is visible in the background. The sky is blue overhead.

Input: Does the coffee shop sell sandwiches?

Response: Yes.

Input: What else is there to eat at the coffee shop?

Response: In addition to coffee, the New Cut Coffee shop offers sandwiches and pastries. The image also features a building and a vehicle in the background.

Image 8

Web cam mode: the tiny VLM live demo can accept images via a connected webcam if browser permissions are given. Screengrab: JT.

Input: Can you see any ethernet ports?

Response: Yes, there are two visible ports on the device that the man is holding.

Verdict on tiny VLM testing

It’s hard not to be impressed with the performance of Moondream2 – a VLM that’s initialized with weights from SigLIP and Phi 1.5, and then trained using synthetic data generated by Mixtral.

Evaluating the responses generated from TechHQ’s test sequence of images and text inputs, it’s clear that Moondream2 gets more right than it gets wrong, and is capable of noticing fine details that weary human eyes may miss. Also, as mentioned, the AI model’s small footprint is another winning feature – paving the way for inference to take place on edge devices that could include drones or robots.

VLMs could turn out to be incredibly useful in helping robots orientate themselves in the world, and being able to run algorithms locally eliminates the need for network access – or at least provides redundancy in the case of outages.

A smart idea for smart glasses

Alongside its launch of the Quest 3 VR headset, Meta also announced a collaboration with Ray-Ban to produce an updated line of smart glasses. The product, which features Qualcomm’s Snapdragon AR1 Gen 1 platform, has some local voice control capabilities – for example, to instruct the spectacles to take images using the built-in ultra-wide 12MP camera. Users simply have to say, “Hey Meta, take a picture.”

Being able to couple those photos with a VLM running locally would turn the Ray-Ban smart glasses into a much more compelling product, and could bring scenes to life for visually impaired wearers without the need for pairing with a smartphone.

Vision assistance powered by edge-compatible VLMs could dramatically enhance the capabilities of digital camera-equipped devices. As the webcam image in our test sequence highlights, there’s the potential for algorithms to help with maintenance and repair tasks, distributing knowledge and expertise across the globe.

AI is doing for knowledge work what robotics has done for manufacturing, and it’s just the beginning.
Inkitt: what happens when AI eats its own words?
https://techhq.com/2024/03/ai-will-help-writers-create-literally-average-stories/ (Mon, 04 Mar 2024)

  • Inkitt AI help for writers shows successful patterns.
  • Success delivered by what are proven to be winning formulae.
  • We look forward to Fast & Furious 52’s release in 2066.

The latest $37m funding round for the self-publishing platform Inkitt was awarded at least in part due to its intention to use large language models that work on behalf of its authors. The AI will guide submissions to the eponymous app in areas such as readability, plot, and characterization.

Self-publishing is hugely popular among authors. It circumvents the often-frustrating process of finding an agent, the rejections from established publishing houses, and the erosion of income as parties in the chain each take a cut of sales revenue. An AI-powered virtual assistant can help authors with advice and offer changes to a text that are drawn from previously successful stories.

Inkitt’s AI amalgamates the output from several large language models to find trends in the enormous body of previously published books, helping writers align their work with already successful and popular titles. At first sight, its approach is clearly more appropriate than having ‘authors’ simply use an AI to create the words of a book. It’s also a step above once-respected news outlets using AI to write stories. But a deeper understanding of how large language models work tells us that the boundaries of creativity possible with AI are claustrophobic.

“Cuba book” by @Doug88888 is licensed under CC BY-NC-SA 2.0.

Whether in video, visual art, game design, or text, machine learning algorithms are educated on extant publications. During training, they process large quantities of data and learn patterns that can then be used to reproduce material similar to the training corpus.

In the case of a novel or screenplay’s structure, then, what’s succeeded in the past (in terms of popularity and, often, revenue generated) can be teased out from the also-rans. It’s a process that is as old as creativity itself, albeit a habit that’s formed without digital algorithmic help. Hollywood industry experts can produce lists of formulae that work for the plot, the rhythm of narrative flow, characterization, and so on. Such lists, whether ephemeral or real, inform the commissioning and acceptance of new works that will have the best chance to succeed.

The threat to creativity from the models used in ways like that proposed by Inkitt is twofold. The most obvious is one of the repetition of successful formulae. This means, depending on your choice of language, works that are on-trend, derivative, zeitgeisty, or repetitious.

The second threat comes from the probability curves embedded in the models themselves. Any creative work chewed up by an algorithm will have its deviations from the average diminished. What can’t easily be judged is what makes something an exception: whether it differs from the average because it’s badly created or because it’s superbly created. Truly fantastic creative works may be given lesser weight simply because they don’t conform on other measures, like sentence length or a color palette that is (currently) familiar.

The effect is one of standardization and averaging across the gamut of creative output so that a product is successfully conformist to the mean. Standardization equals conforming, which equals success. But standardization leads inexorably to stagnation.
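
A toy simulation makes that averaging effect concrete. Assume a made-up next-token distribution in which one safe continuation dominates and a few unusual ones sit in the tail; as the sampling temperature drops – a common way to make generative output more predictable – the unusual continuations all but disappear:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up next-token scores: one "safe" continuation dominates,
# a handful of unusual ones sit in the tail.
logits = np.array([3.0, 1.0, 0.5, 0.2, 0.1])
tokens = ["safe", "odd-1", "odd-2", "odd-3", "odd-4"]

def sample_freqs(logits, temperature, n=10_000):
    """Softmax-with-temperature sampling; returns observed token frequencies."""
    p = np.exp(logits / temperature)
    p /= p.sum()
    draws = rng.choice(len(logits), size=n, p=p)
    return np.bincount(draws, minlength=len(logits)) / n

for t in (1.0, 0.7, 0.3):
    freqs = sample_freqs(logits, t)
    print(f"T={t}: " + ", ".join(f"{tok}={f:.2f}" for tok, f in zip(tokens, freqs)))

# At T=0.3 the "safe" token is picked almost 100% of the time - the tail vanishes.
```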

In practical uses of AI today, many of the traits and methods of models are perfect for their designed purpose. Data analytics of spending patterns informs vendors’ choices for new product development based on what sells well. Outliers and exceptions have little importance and are rightly ignored by the model’s probability curve.

But in areas of creating new art, product design, music composition, or text creation, the exceptions can have value – a value that is increased by not conforming to average patterns of success, readability, aesthetic attractiveness, characterization, or one of a thousand other variables at play. If conformity to guidelines means success, then how we define success is the interesting question. History is littered with composers, artists, and writers who didn’t conform and were successful during their lifetimes or posthumously. Plenty, too, who were successful conformists. And many who kicked against prevailing strictures and got nowhere, dying in poverty.

“book” by VV Nincic is licensed under CC BY 2.0.

So what help can AI actually deliver for writers? As in many areas of life and business, it can work well as a tool, but it cannot – or at least should not – be allowed to dictate the creative elements of art.

By reducing creativity to an algorithmically generated idea of “what works,” non-conforming talent is immediately stymied. It depends, of course, on what the creator’s desired outcome is, or how they deem themselves to be successful. If they want a greater chance of achieving mainstream popularity, then the Inkitt AI will help guide them in what to change to better fit the milieu. Many dream of being the scriptwriter or 3D visual designer for the next movie blockbuster, and there is value in that. Inkitt may make people better writers, but it’s the individual’s idea of what a ‘better’ writer is that will inform their decision whether or not to sign up.

Individual human voices can make great creative works. But by placing those works inside a mass of mediocrity (and worse) and teaching an algorithm to imitate the mean, what’s produced is only ever, at best, going to be slightly better than average. As more content is created by AI and it too becomes part of the learning corpora of machine learning algorithms, AIs will become self-replicating, but not in the manner of dystopian sci-fi. Much of the future’s published content will just be very, very dull.

Oatmeal for breakfast, lunch, and dinner.

Amalgamating for the sellable mean turns tears of human creativity into nothing more than raindrops in the flood.

US aims for chip supremacy: From zero to 20% by 2030
https://techhq.com/2024/02/us-aims-for-chip-supremacy-from-zero-to-20-by-2030/ (Wed, 28 Feb 2024)

  • The US wants to regain its leadership within the chip industry, and Commerce Sec. Raimondo targets 20% domestic production of leading-edge chips by 2030.
  • The US currently produces none; hence, the ambitious goal is set for the end of this decade.
  • Biden admin aims to bring memory chip production to the US “at scale.”

As the global demand for semiconductors surges, the US has embarked on a bold mission to revitalize its chip manufacturing industry. Last February, the Commerce Department launched the CHIPS for America program, echoing the ambitious spirit of the space race era. While US companies lead in AI development, the absence of domestic leading-edge chip production poses a critical challenge. However, with a strategic focus on talent development, R&D, and manufacturing, the US aims to fill this gap and produce 20% of the world’s leading-edge chips by 2030.

Commerce Secretary Gina Raimondo remains optimistic about the program’s potential to transform America’s industrial landscape. The US aims to fortify its supply chains and reduce reliance on geopolitical rivals by investing in leading-edge logic chip manufacturing and onshoring memory production. “Our investments in leading-edge logic chip manufacturing will put this country on track to produce roughly 20% of the world’s leading-edge logic chips by the end of the decade,” Commerce Secretary Gina Raimondo said during a speech at the Center for Strategic and International Studies (CSIS) on February 26, 2024.

“That’s a big deal,” Raimondo added. “Why is that a big deal? Because folks, today we’re at zero.” Her speech came a year after the initiation of funding applications under the 2022 CHIPS and Science Act by the US Department of Commerce. With a staggering US$39 billion earmarked for manufacturing incentives, the stage has been set for a transformative journey in the semiconductor landscape. 

US Commerce Secretary Gina Raimondo speaks during the UK Artificial Intelligence (AI) Safety Summit at Bletchley Park, in central England, on November 1, 2023. (Photo by TOBY MELVILLE/POOL/AFP).

Raimondo’s ambitious vision, unveiled concurrently, delineates the path ahead. By 2030, the US aims to spearhead the design and manufacture of cutting-edge chips, establishing dedicated fabrication plant clusters to realize this audacious objective. She said that, among other factors, AI has significantly shifted the need for advanced semiconductor chips.

“When we started this, generative AI wasn’t even part of our vocabulary. Now, it’s everywhere. Training a single large language model takes tens of thousands of leading-edge semiconductor chips. The truth is that AI will be the defining technology of our generation. You can’t lead in AI if you don’t lead in making leading-edge chips. And so our work in implementing the CHIPS Act became much more important,” Raimondo emphasized.

If the US achieves its goals, it will result in “hundreds of thousands of good-paying jobs,” Raimondo said Monday. “The truth of it is the US does lead, right? We do lead. We lead in the design of chips and the development of large AI language models. But we don’t manufacture or package any leading-edge chips that we need to fuel AI and our innovation ecosystem, including chips necessary for national defense. We don’t make it in America, and the brutal fact is the US cannot lead the world as a technology and innovation leader on such a shaky foundation,” she reiterated.

Why is there a gap in US chip manufacturing?

The US grappled with a significant gap in chip manufacturing for several reasons. Firstly, many semiconductor companies outsourced their manufacturing operations overseas to cut costs, leading to a decline in domestic chip production capacity. Secondly, as semiconductor technology advanced, the complexity and cost of building cutting-edge fabrication facilities increased, discouraging investment in new fabs. 

Meanwhile, global competitors like Taiwan, South Korea, and China expanded their semiconductor industries rapidly, intensifying competition. While other countries provided substantial government support to their semiconductor industries, the US fell behind. Then there were regulatory hurdles: environmental regulations made building and operating semiconductor fabs in the US challenging and costly.

A combination of outsourcing, technological challenges, global competition, lack of government support, and regulatory issues contributed to the US’s gap in chip manufacturing, with none of the world’s leading-edge chips being produced domestically.

And then the world woke up one morning in dire need of leading-edge chips to underscore the technology behind the next industrial revolution, and America realized its mistake.

“We need to make these chips in America. We need more talent development in America. We need more research and development in America and just a lot more manufacturing at scale,” Raimondo said in her speech at CSIS.

2030 vision: prioritizing future-ready projects

US President Joe Biden greets attendees after delivering remarks on his economic plan at a TSMC chip manufacturing facility in Phoenix, Arizona, on December 6, 2022. (Photo by Brendan SMIALOWSKI/AFP).

In Raimondo’s speech, she declared that the US will first prioritize projects that will be operational by the end of this decade. “I want to be clear: there are many worthy proposals that we’ve received with plans to come online after 2030, and we’re saying no, for now, to those projects because we want to maximize our impact in this decade,” she clarified.

In short, the US will give priority to “excellent projects that could come online this year” rather than granting incentives to projects that will come online 10 or 12 years from now. She also referred back to a goal set out last year: when all is said and done with the CHIPS initiative, the US should have at least two new large-scale clusters of leading-edge logic fabs, each employing thousands of workers.

“I’m pleased to tell you today we expect to exceed that target,” she claimed. So far, the Commerce Department has awarded grants to three companies in the chip industry as part of the CHIPS Act: BAE Systems, Microchip Technology, and, most recently, a significant US$1.5 billion grant to GlobalFoundries. Additional funding is anticipated for Taiwan Semiconductor Manufacturing Co. and Samsung Electronics as they establish new facilities within the US.

Raimondo also highlighted her nation’s commitment to supporting the production of older-generation chips, referred to as mature-node or legacy chips. “We’re not losing sight of the importance of current generation and mature node chips, which you all know are essential for cars, medical devices, defense systems, and critical infrastructure.”

Yet the lion’s share of investments, totaling US$28 billion out of US$39 billion, is earmarked for leading-edge chips. Raimondo emphasized that the program aims for targeted investments rather than scattering funds widely. She disclosed that the department has received over US$70 billion in requests from leading-edge companies alone.

For now, anticipation is high for the Commerce Department’s new round of grant announcements, scheduled to coincide with President Joe Biden’s State of the Union address on March 7. Among the expected recipients is TSMC, which is establishing new Arizona facilities.

Affective computing – how to build better digital products
https://techhq.com/2024/02/affective-computing-how-to-build-better-digital-products/ (Tue, 27 Feb 2024)

“The goal of affective computing is to create a computing system capable of perceiving, recognizing, and understanding human emotions and responding intelligently, sensitively, and naturally,” write researchers in the journal Intelligent Computing.

Hitting such a milestone would signal a UX breakthrough: digital applications would become much more intuitive to use, as programs would be able to determine – through emotional insights – when users are struggling, and adapt accordingly.

A popular approach has been to use wearables fitted with sensors, but making sense of the data is easier said than done. Developers are aware that the physiological response accompanying the emotional response to an event can be the same for different emotions. For example, fear, anger, and excitement can all send heart rates racing. To be effective, affective computing algorithms need to differentiate between such signals.

The good news is that machines have many inputs to draw upon when trying to determine the emotional state of users. Many of the devices we use daily have cameras and microphones, which pave the way for image, gesture, and speech recognition. So, the idea of applying affective computing principles to our devices is by no means far-fetched.
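
A toy sketch shows why fusing those inputs matters. With synthetic data – invented for illustration, not a real affective dataset – heart rate alone cannot separate fear, anger, and excitement, while adding facial-valence and vocal-pitch features lifts accuracy well above chance:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300
labels = rng.integers(0, 3, n)  # 0 = fear, 1 = anger, 2 = excitement

# Heart rate is elevated in all three states, so on its own it is ambiguous.
heart_rate = rng.normal(110.0, 12.0, n)

# Invented per-state means for facial valence and vocal pitch variability.
valence = rng.normal(np.array([-1.0, -0.6, 1.0])[labels], 0.4)
pitch = rng.normal(np.array([0.8, 0.3, 0.9])[labels], 0.4)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
for name, X in [("heart rate only", heart_rate.reshape(-1, 1)),
                ("fused modalities", np.column_stack([heart_rate, valence, pitch]))]:
    acc = cross_val_score(clf, X, labels, cv=5).mean()
    print(f"{name}: {acc:.2f} mean accuracy")  # fused features score far higher
```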

Emotion as a service

Speaking on the topic of affective computing (what is it and why should I care?), Håkan Silfvernagel – a consultant with expertise in human-machine interfaces and robotics – mentions Microsoft’s Emotion API, which is capable of receiving an image of a human face and returning one of eight emotions, including happiness, sadness, fear, and disgust.

Other tech firms, such as Amazon and IBM, have developed emotion recognition capabilities as well. And engagement analytics provided by Affectiva – whose co-founder, Rana el Kaliouby, wrote ‘Girl Decoded’ to share her quest to reclaim our humanity by bringing emotional intelligence to technology – are popular with many of the world’s largest advertisers.

Considering device applications, Affectiva has applied its so-called Emotion AI to automotive settings using algorithms fed with in-cabin camera footage to detect the state, emotions, and reactions of drivers and other occupants in the context of vehicle environments.

It’s not a stretch to imagine soothing music being piped out of a car’s speakers to calm the driver during rush hour traffic – a use case that’s been given to highlight the adaptive properties of affective computing systems.

Lately, advances in AI have provided a major boost to the field – a point made by Rosalind Picard, who wrote the book on Affective Computing and has been working in the area for decades. Sentiment analysis of spoken words and text has never been as straightforward to carry out as it is today, and that information can be fed back into education and training systems to gauge the performance of learners.

Likewise, image recognition and object tracking have come on leaps and bounds thanks to AI advances, and those breakthroughs positively impact affective computing systems.

Apple has found itself having to interpret the expressions of wearers of the Vision Pro to generate what the tech giant dubs personas, which are used as avatars that participate in FaceTime calls.

The more emotional signals that VR systems can provide, the richer the experience for headset wearers. And affective computing will be key for tempting more business users to participate in the metaverse.

Advances are being made in brain-computer interfaces, which could give developers another perspective on users’ emotional states. As the cost of wearables incorporating EEG measurements comes down, this information will add to the data fusion used by affective computing devices.

Signals gathered from the brain could spotlight signs of fatigue, joy, or frustration – to give just a few examples. Also, beyond making software more user-friendly, virtual assistants programmed using affective computing techniques have been shown to encourage creative problem-solving, which hints at further wins for business users.
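
For a flavour of what that EEG data fusion might involve, here is a minimal sketch using a synthetic signal. The alpha/beta band-power ratio below is a common heuristic for relaxation or drowsiness in the research literature, not a clinical measure, and the sampling rate is our assumption about a consumer headset:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average spectral power of a signal within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

fs = 256  # assumed sampling rate of a consumer EEG headset
t = np.arange(0, 10, 1.0 / fs)

# Synthetic channel: a strong 10 Hz alpha rhythm (associated with relaxed,
# eyes-closed states) buried in broadband noise.
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.default_rng(1).normal(0, 1, t.size)

alpha = band_power(eeg, fs, 8, 12)
beta = band_power(eeg, fs, 13, 30)
print(f"alpha/beta ratio: {alpha / beta:.1f}")  # a high ratio could flag fatigue
```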

5G positioning adds to cellular network skillset
https://techhq.com/2024/02/5g-positioning-adds-to-cellular-network-skillset/ (Mon, 26 Feb 2024)


With Mobile World Congress (MWC) 2024 getting underway in Barcelona this week, it’s worth reflecting on one advantage of 5G that often goes under the radar – its positioning prowess. Mobile networks have long had the ability to triangulate user equipment based on signals received from neighboring base stations, but 5G positioning takes this to the next level.

What’s more, while ramping up the accuracy of radio-based positioning typically incurs a cost for users – who may need to install additional hardware – 5G breaks this trend by using existing communications technology to deliver high-resolution location information at a lower price point.

Want to know more about 5G positioning? Then MWC 2024 is the place to be.

Thanks to features added to the global wireless standard, 5G networks offer positioning capabilities that can pinpoint connected devices within a 1 m area, and that’s just the beginning. “5G Advanced represents a further development of 5G technology and promises faster data transmission of 20 GBit/s and localization accuracies of 20-30 cm to meet the growing demands of the connected world,” writes the Fraunhofer Institute for Integrated Circuits (Fraunhofer-Institut für Integrierte Schaltungen).

Applications are numerous and will appeal to industrial users in particular. As Ericsson – a provider of indoor 5G positioning systems – points out, smart manufacturing operators can use real-time location information to specify tool settings. “Tightening wheels and putting on car doors requires different torque curves,” explain Magnus Kristersson and Partha Sarathy – product specialists at the global communications technology firm – in a related blog post. “With indoor 5G positioning, we can automate getting the right torque curve to the right tool while disabling tools that are not in a work cell.”

Qualcomm – a developer of chips for mobile devices – has put together a test bed highlighting its industrial precise positioning capabilities using 5G and complementary technologies. In the demonstration, engineers used 5G positioning augmented with machine learning RF fingerprinting to locate machinery under non-line-of-sight conditions.

The setup has six 5G transmission reception points distributed within the facility, which can follow objects of interest with high precision thanks to the data fusion approach.

On TechHQ, we’ve written about how private 5G networks can be a game-changer for businesses. Firms can use private 5G networks to bring connectivity to locations not readily served by public mobile networks – for example, operations in remote areas. But the benefits don’t have to stop there.

If companies are looking for accurate real-time location services on top of data transmission capabilities then it’s possible that 5G networks could perform both duties, saving on the amount of upfront investment required.


Modern wireless standards such as 5G feature positioning reference signals, which can be received passively by user equipment to help pinpoint devices. It’s also possible to measure round trip time using multiple cells to deliver positioning information. And one of the big breakthroughs is the use of angular based methods that report on the arrival of signals across 5G antenna arrays.
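
A minimal multilateration sketch shows the round-trip-time idea at work. The cell positions, device position, and noise level below are invented for illustration; a production 5G positioning stack would also fuse angle-of-arrival and reference-signal measurements:

```python
import numpy as np
from scipy.optimize import least_squares

# Invented base-station coordinates (metres) and true device position.
cells = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
true_pos = np.array([180.0, 320.0])

# Ranges as they might be derived from round-trip-time measurements,
# with a couple of metres of noise added.
rng = np.random.default_rng(7)
ranges = np.linalg.norm(cells - true_pos, axis=1) + rng.normal(0.0, 2.0, len(cells))

def residuals(p):
    """Predicted minus measured range for each cell, given candidate position p."""
    return np.linalg.norm(cells - p, axis=1) - ranges

estimate = least_squares(residuals, x0=np.array([250.0, 250.0])).x
print(f"estimated position: {estimate.round(1)}, true position: {true_pos}")
```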

Researchers in Sweden have shown that it is possible to perform vehicular positioning with only a single base station, pointing to how car-makers could maintain navigation when GPS signals are unavailable.

Satellite navigation can become unpredictable when relatively weak GPS signals are blocked in dense urban areas. Mass transit systems such as trains can also be disrupted when satellite positioning fails, as their automatic door-opening systems depend on GPS functionality.

The list of potential use cases for 5G positioning is long and includes use cases indoors and outdoors, from asset tracking to emergency rescue. Plus, solutions can be portable – such as the Fraunhofer IIS Nomadic 5G positioning testbed, which the institute has on display at MWC 2024.

Oh, Air Canada! Airline pays out after AI accident
https://techhq.com/2024/02/air-canada-refund-for-customer-who-used-chatbot/ (Wed, 21 Feb 2024)

  • Ruling says Air Canada must refund customer who acted on information provided by chatbot.
  • The airline’s chatbot isn’t available on the website anymore.
  • The case raises the question of autonomous AI action – and who (or what) is responsible for those actions.

The AI debate rages on, as debates in tech are wont to do.

Meanwhile, in other news, an Air Canada chatbot suddenly has total and distinct autonomy.

Although it couldn’t take the stand, when Air Canada was taken to court and asked to pay a refund offered by its chatbot, the company tried to argue that “the chatbot is a separate legal entity that is responsible for its own actions.”

After the death of his grandmother, Jake Moffatt visited the Air Canada website to book a flight from Vancouver to Toronto. Unsure of the bereavement rate policy, he opened the handy chatbot and asked it to explain.

Now, even if we take the whole generative AI bot explosion with a grain of salt, some variation of the customer-facing ‘chatbot’ has existed for years. Whether churning out automated responses and a number to call, or replying with the off-key chattiness now ubiquitous in generative AI output, the chatbot is the primary response consumers get from almost any company.

And it’s trusted to be equivalent to getting answers from a human employee.

So, when Moffatt was told he could claim a refund after booking his tickets, he went ahead and, yielding to the bot’s encouragement, booked flights right away – safe in the knowledge that, within 90 days, he’d be able to claim a partial refund from Air Canada.

He has the screenshot to show that the chatbot’s full response was:

If you need to travel immediately or have already travelled and would like to submit your ticket for a reduced bereavement rate, kindly do so within 90 days of the date your ticket was issued by completing our Ticket Refund Application form.

Which seems about as clear and encouraging as you’d hope to get in such circumstances.

He was surprised, then, to find that his refund request was denied. Air Canada policy actually states that the airline won’t provide refunds for bereavement travel after the flight has been booked; the information provided by the chatbot was wrong.

Want an Air Canada refund? Talk to the bot...

Via Ars Technica.

Moffatt spent months trying to get his refund, showing the airline what the chatbot had said. He was met with the same answer: refunds can’t be requested retroactively. Air Canada’s argument was that because the chatbot response included a link to a page on the site outlining the policy correctly, Moffatt should’ve known better.

We’ve underlined the phrase that the chatbot used to link further reading. The way that hyperlinked text is used across the internet – including here on TechHQ – means few actually follow a link through. Particularly in the case of the GenAI answer, it functions as a citation-cum-definition of whatever is underlined.

Still, the chatbot’s hyperlink meant the airline kept refusing to refund Moffatt. Its best offer was a promise to update the chatbot and give Moffatt a $200 coupon. So he took the airline to court.

Moffatt filed a small claims complaint with British Columbia’s Civil Resolution Tribunal. Air Canada argued that not only should its chatbot be considered a separate legal entity, but also that Moffatt never should have trusted it. Because naturally, customers should of course in no way trust systems put in place by companies to mean what they say.

Christopher Rivers, the Tribunal member who decided the case in favor of Moffatt, called Air Canada’s defense “remarkable.”

“Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot,” Rivers wrote. “It does not explain why it believes that is the case” or “why the webpage titled ‘Bereavement travel’ was inherently more trustworthy than its chatbot.”

Rivers also found that Moffatt had no reason to believe one part of the site would be accurate and another wouldn’t – Air Canada “does not explain why customers should have to double-check information found in one part of its website on another part of its website,” he wrote.

In the end, he ruled that Moffatt was entitled to a partial refund of $650.88 in Canadian dollars (CAD) (around $482 USD) off the original fare, which was $1,640.36 CAD (around $1,216 USD), as well as additional damages to cover interest on the airfare and Moffatt’s tribunal fees.

Ars Technica heard from Air Canada that it will comply with the ruling and considers the matter closed. Moffatt will receive his Air Canada refund.

The AI approach

Last year, Air Canada’s CIO, Mel Crocker, told news outlets that the company had launched the chatbot as an AI “experiment.”

Originally, it was a way to take the load off the airline’s call center when flights were delayed or cancelled. Read: give customers information that would otherwise be available from human employees – which must be presumed to be accurate, or its entire function is redundant.

In the case of a snowstorm, say, “if you have not been issued your new boarding pass yet and you just want to confirm if you have a seat available on another flight, that’s the sort of thing we can easily handle with AI,” Crocker told the Globe and Mail.

Over time, Crocker said, Air Canada hoped the chatbot would “gain the ability to resolve even more complex customer service issues,” with the airline’s ultimate goal being to automate every service that did not require a “human touch.”

Crocker said that where Air Canada could, it would use “technology to solve something that can be automated.”

The company’s investment in AI was so great, she told the media, that it exceeded the cost of continuing to pay human workers to handle simple enquiries.

But the fears that robots will take everyone’s jobs are fearmongering nonsense, obviously.

In this case, liability might have been avoided if the chatbot had warned customers that its information could be inaccurate. But such a warning is not good optics when you’re spending more on the bot than on humans, who are at least marginally less likely to hallucinate refund policies out of thin data.

Because it didn’t include any such warning, Rivers ruled that “Air Canada did not take reasonable care to ensure its chatbot was accurate.” The responsibility lies with Air Canada for any information on its website, regardless of whether it’s from a “strategic page or a chatbot.”

This case opens up the question of AI culpability in the ongoing debate about its efficacy. On the one hand, we have a technology that’s lauded as infallible – or at least on its way to infallibility, and certainly as trustworthy as human beings, with their legendary capacity for “human error.” In fact, it’s frequently sold as a technology that eradicates human error, (and, sometimes, the humans too) from the workplace.

So established is the belief that (generative) artificial intelligence is intelligent that, when a GenAI-powered chatbot makes a mistake, the blame lies with it, not with the humans who implemented it.

Fears of what AI means for the future are fast being reduced in the public media to the straw man that it will “rise up and kill us” – a line not in any way subdued by calls for AI development to be paused or halted “before something cataclysmic happens.”

The real issue though is the way in which humans are already beginning to regard the technology as an entity separate from the systems in which it exists – and an infallible, final arbiter of what’s right and wrong in such systems. While imagining the State versus ChatGPT is somewhat amusing, passing off corporate error to a supposedly all-intelligent third party seems like a convenient “get out of jail free card” for companies to play – though at least in Canada, the Tribunal system was engaged enough to see this as an absurd concept.

Imagine for a moment that Air Canada had better lawyers, with much greater financial backing, and the scenario of “It wasn’t us, it was our chatbot” becomes altogether more plausible as a defence.

Ultimately, what happened here is that Air Canada refused compensation to a confused and grieving customer. Had a human employee told Moffatt he could get a refund after booking his flight, then perhaps Air Canada could refuse – but only because of the unspoken assumption that said employee would be working from given rules – a set of data upon which they were trained, perhaps – that they’d actively ignored.

In fact, headlines proclaiming that the chatbot ‘lied’ to Moffatt follow the established formula for a story in which a disgruntled or foolish employee knowingly gave out incorrect information. The chatbot didn’t ‘know’ that what it said was false; had it been given accurate enough training, it would have provided the answer available elsewhere on the Air Canada website.

At the moment, the Air Canada chatbot is not on the website.

Feel free to imagine it locked in a room somewhere, having its algorithms hit with hockey sticks, if you like.

It’s also worth noting that while the ruling was made this year, it was 2022 when Moffatt used the chatbot – back in the pre-ChatGPT dark ages of AI. While the implications of the case impact the AI industry as it exists here and now, the chatbot’s error in itself isn’t representative, given that it was an early example of AI use.

Still, Air Canada freely assigned it the culpability of a far more advanced intelligence, which speaks to perceptions of GenAI’s high-level abilities. Further, this kind of thing is still happening:

"Howdy doodley doo!" The chipper nature of chatbots often disguises their data or algorithm flaws.

“No takesies backsies.” There’s that chatbot chattiness…

Also, does it bother anyone else that an AI chatbot just hallucinated a more humane policy than the human beings who operated it were prepared to stand by?

Samsung seizes 2nm AI chip deal, challenging TSMC’s reign
https://techhq.com/2024/02/samsung-seizes-2nm-ai-chip-deal-challenging-tsmc/ (Tue, 20 Feb 2024)

  • The inaugural deal for 2nm chips marks a significant milestone for Samsung, signaling a challenge to TSMC and its dominance.
  • The deal could significantly change the power balance in the industry.
  • Samsung has a strategy to offer lower prices for its 2nm process, reflecting its aggressive approach to attracting customers, particularly eyeing Qualcomm’s flagship chip orders.

In the race for technological supremacy and market dominance, Taiwan Semiconductor Manufacturing Company (TSMC) and Samsung Electronics lead the charge in semiconductor manufacturing. As demand for advanced chips surges in the 5G, AI, and IoT era, competition intensifies, driving innovation. Both companies vie to achieve smaller nanometer nodes, which are pivotal for technological advancement. 

When it comes to semiconductor innovation, TSMC spearheads the charge, with ambitious plans for 3nm and 2nm chips, promising a leap in performance and efficiency. Meanwhile, Samsung, renowned for its memory chip prowess, is mounting a determined challenge to TSMC’s supremacy. Recent reports suggest that Samsung is on the brink of unveiling its 2nm chip technology, marking a significant milestone in its bid to rival TSMC.

In a notable turn of events disclosed during Samsung’s Q4 2023 financial report, the tech world buzzed with news of Samsung’s foundry division securing a prized contract for 2nm AI chips. Amid speculation, Samsung maintained secrecy about the identity of this crucial partner.

But earlier this week, a report from Business Korea revealed that the customer is Japanese AI startup Preferred Networks Inc. (PFN). Since its launch in 2014, PFN has emerged as a powerhouse in AI deep learning, drawing substantial investments from industry giants like Toyota, NTT, and FANUC, a leading Japanese robotics firm.

Samsung vs TSMC

Samsung, headquartered in Suwon, South Korea, is set to unleash its cutting-edge 2nm chip processing technology to craft AI accelerators and other advanced AI chips for PFN, as confirmed by industry insiders on February 16, 2024. 

If legitimate, the landmark deal would prove mutually advantageous: it would give PFN access to state-of-the-art chip innovations for a competitive edge while propelling Samsung forward in its fierce foundry market rivalry with TSMC, according to insider reports.

Ironically, PFN has had a longstanding partnership with TSMC dating back to 2016, but is opting to shift gears from here on out, going with Samsung’s 2nm node for its upcoming AI chip lineup, according to a knowledgeable insider. PFN also chose Samsung over TSMC due to Samsung’s full-service chip manufacturing capabilities, covering everything from chip design to production and advanced packaging, sources revealed.

Experts also speculate that although TSMC boasts a more extensive clientele for 2nm chips, PFN’s strategic move to Samsung hints at a potential shift in the Korean giant’s favor. This pivotal decision may pave the way for other significant clients to align with Samsung, altering the competitive landscape in the chipmaking realm.

No doubt, in the cutthroat world of contract chipmaking, TSMC reigns supreme, clinching major deals with industry giants like Apple Inc. and Qualcomm Inc. But as demand for top-tier chips escalates, the race for technological superiority heats up, with TSMC and Samsung at the forefront of the battle. While TSMC currently leads the pack, with 2nm commitments from clients like Apple and Nvidia, Samsung is hot on its heels.

“Apple is set to become TSMC’s inaugural customer for the 2nm process, positioning TSMC at the forefront of competition in the advanced process technology,” TrendForce said in its report. Meanwhile, according to Samsung’s previous roadmap, its 2nm SF2 process is set to debut in 2025. 

Samsung’s Foundry Forum (SFF) roadmap could challenge TSMC.

“As stated in Samsung’s Foundry Forum (SFF) plan, Samsung will begin mass production of the 2nm process (SF2) in 2025 for mobile applications, expand to high-performance computing (HPC) applications in 2026, and further extend to the automotive sector and the expected 1.4nm process by 2027,” TrendForce noted.

Compared with Samsung’s second-generation 3nm process (3GAP), SF2 offers a 25% improvement in power efficiency at the same frequency and complexity, a 12% performance boost at the same power consumption and complexity, and a 5% reduction in chip area. In short, with TSMC eyeing mass production of 2nm chips by 2025, the competition between these tech titans is set to reach new heights.

Yet, in a strategic maneuver reported by the Financial Times, Samsung is gearing up to entice customers with discounted rates for its 2nm process, a move poised to shake up the semiconductor landscape. With its sights set on Qualcomm’s flagship chip production, Samsung aims to lure clients away from TSMC by offering competitive pricing. 

This bold initiative signals Samsung’s determination to carve out a larger market share and challenge TSMC’s dominance in the semiconductor industry.

Death of an innocent? Waymo robotaxi set alight https://techhq.com/2024/02/waymo-robotaxi-attacked-by-crowds-in-san-francisco-autonomous-vehicles/ Thu, 15 Feb 2024 18:39:16 +0000 https://techhq.com/?p=232131

Have you seen the footage of a Waymo robotaxi going up in flames? Waymo robotaxis have been having their moment as the sole autonomous vehicle on the roads in San Francisco. In October of last year, after an incident involving a pedestrian, Cruise – the General Motors owned self-driving offering – was accused of covering... Read more »

The post Death of an innocent? Waymo robotaxi set alight appeared first on TechHQ.

]]>

Have you seen the footage of a Waymo robotaxi going up in flames?

Waymo robotaxis have been having their moment as the sole autonomous vehicles on the roads of San Francisco. In October of last year, after an incident involving a pedestrian, Cruise – the General Motors-owned self-driving offering – was accused of covering up information relating to the accident, and its permit was removed by the California DMV.

That doesn’t mean Waymo, owned by Google’s parent company Alphabet, hasn’t had safety issues of its own: last June, one of its vehicles killed a dog, and more recently a cyclist was struck and injured after being obscured from the car’s sensors by another vehicle at an intersection.

All this is to say that the safety of driverless cars is up in the air. For many, this means they shouldn’t be on the roads; San Franciscans have understandably felt used as test subjects in the robotaxi experiment. Driverless vehicles drove more than nine million miles on public roads with test permits last year.

A recent “attack” on a Waymo robotaxi saw the vehicle’s windows smashed before it was set alight by a firework. The cab had attempted to drive through San Francisco’s Chinatown on the first day of the lunar new year – something a human driver would know to avoid.

Crowds brought the vehicle to a stop at the intersection of Jackson Street and Grant Avenue, where it was covered in graffiti before the violence escalated.

Via Reuters.

But it’s difficult to wholeheartedly call it ‘violence’ or an attack, given that it is just a car – there were no passengers and, obviously, no driver. The Jaguar I-Pace operated by Waymo was the only ‘casualty’.

“The vehicle was not transporting any riders and no injuries have been reported,” Waymo said in a statement. “We are working closely with local safety officials to respond to the situation.”

The company has described the vandalism as an isolated case, but one Waymo rider told Reuters that he rode in a Waymo the next day through another San Francisco crowd, which shot fireworks at the car.

“Once they saw the Waymo, they started pointing directly at the Waymo instead of up in the sky,” said Nathan Flurry, who recorded the scene. Flurry describes himself as a Waymo fan.

As well as the incidents involving robotaxis, some are saying that there’s a more generalized fear of AI at play.

“We are seeing people reaching a boiling point over tech that they do not want and does not make their lives better,” said Missy Cummings, director of the George Mason University Autonomy and Robotics center and a former adviser to U.S. traffic safety regulators.

Aaron Peskin, president of the San Francisco Board of Supervisors, said it may not have been an “anti-tech thing” but rather just criminal mischief by “a bunch of hoodlums.”

True, the streets had been almost completely clear of vehicles as pedestrians gathered to watch the fireworks. The crowd had parted for the occasional car to pass, but the Waymo robotaxi came to a halt in the middle of it all. A fire department spokesperson confirmed the Waymo vehicle partially blocked the road.

Waymo didn’t respond to questions on Monday about why the car drove into the crowded public event. The incident gives rise to another point in the driverless vehicle debate, according to Bryant Walker Smith, a professor at the University of South Carolina with expertise in autonomous-vehicle law: whether self-driving cars can or should be able to detect areas busy with pedestrians and navigate around them.

“I would be interested in hearing from Waymo about how its navigation accounts for these kinds of incidents,” he said.

Peskin, who has called for more regulation of autonomous vehicles, said: “Most normal car drivers know that they have to avoid Chinatown during the Lunar New Year holidays. The computer doesn’t understand that.”


Although we couldn’t possibly comment on the folk in the crowd, we’re inclined to agree with Peskin that, although it was representative of festering resentments, the incident wasn’t really motivated by anti-AV rhetoric.

The Safe Street Rebels have run a campaign of disruption against driverless vehicles since their introduction to the city’s roads in 2022. Having realized that a traffic cone placed on the hood of a driverless car interfered with its sensors, halting it until a human employee ‘rescued’ it, the protestors used the trick to disable vehicles across the city.

The Waymo incident has none of the flair or careful thinking the Safe Street Rebels have demonstrated; in their efforts to improve road safety, outright violence is off the cards.

More likely, the crowd was irritated by the interruption of a car cutting through, and with no driver to exchange angry gestures with, the car itself became the target of their annoyance. The lack of a driver also makes it spectacularly easy for things to escalate: verbal hostility, if we can even call it that, doesn’t damage a vehicle the way fireworks do.

California state Senator Dave Cortese, who is proposing legislation to give local governments more power to regulate self-driving cars, said the fact that the car drove into a crowded area during fireworks underscored the technology’s shortcomings.

“What is becoming abundantly clear is that AV technology is not as sophisticated as the industry would like us to believe.”

His comment recalls an incident in 2023 that saw a Cruise robotaxi get stuck in wet cement: “It thinks it’s a road and it ain’t, because it ain’t got a brain and it can’t tell that it’s freshly poured concrete,” resident Paul Harvey told SFGate.

The Waymo robotaxi was in the wrong place at the wrong time, poor thing, but while the incident isn’t all that different to a Boris Bike thrown into the Thames, it is a reminder of the social tensions the introduction of AVs stirs up.

Probably pleased not to be in the headlines for once, GM’s Cruise has hired a new Chief Safety Officer, with Steve Kenner the lucky holder of the newly created role. After October’s accident, founder and CEO Kyle Vogt and COO Dan Kan both resigned, nine executives were chopped, and 24% of Cruise’s staff were laid off.

Steve Kenner appointed Cruise Chief Safety Officer.

That may have signalled the end, but Kenner’s new role suggests Cruise isn’t done yet. He previously served as VP of safety at self-driving truck company Kodiak and before that was Chief Product and Safety Officer at another driverless truck outfit, Locomation.

“I have worked on safety critical technologies throughout my career, I believe in the safety potential of autonomous vehicles, and I want to safely and responsibly realize that potential,” Kenner said in a statement.

General Motors hopes to relaunch services and regain Cruise’s license to operate soon, though the Waymo robotaxi incident could well slow proceedings – a warning to the companies testing the safety of AVs in San Francisco.

The post Death of an innocent? Waymo robotaxi set alight appeared first on TechHQ.

]]>
Big bucks for… the big guys? https://techhq.com/2024/02/arm-stock-price-goes-up-thanks-to-ai-demand/ Thu, 15 Feb 2024 18:36:33 +0000 https://techhq.com/?p=232140

ARM stock prices get a huge hike thanks to AI technology demand. The chip maker’s earnings announcement last week caused the stock price to soar.  After returning to the stock market in September last year, UK chip designer ARM Holdings has seen its value almost double in less than a week. The company, based in... Read more »

The post Big bucks for… the big guys? appeared first on TechHQ.

]]>
  • ARM stock prices get a huge hike thanks to AI technology demand.
  • The chip maker’s earnings announcement last week caused the stock price to soar. 

After returning to the stock market in September last year, UK chip designer ARM Holdings has seen its value almost double in less than a week. The company, based in Cambridge, reported financial results last Wednesday, showing that demand for AI technology is boosting its sales.

This isn’t exactly a tale of the little guy making it, given that chips designed by ARM already power almost every smartphone in the world. Since the earnings announcement last week, shares have soared and are now up by more than 98%.

Nvidia, another big name in the chip sector, has actually seen its shares more than triple in value over the last year. Demand for AI chips is responsible, the boom having helped Nvidia become one of the most valuable publicly traded companies in the world.

Its market value is a jaw-dropping $1.8 trillion, making it the fifth US company to join the “trillion-dollar club” alongside fellow technology giants Apple, Microsoft, Alphabet and Amazon.

What’s slightly different for ARM is that its technology isn’t used directly in AI work. Instead, other chip makers including Nvidia are choosing to use it for central processing units (CPUs) that work well with AI-specific chips.

Taiwan Semiconductor Manufacturing Company (TSMC) also uses ARM’s chip designs. Combine these two major customers with the rest of the consumer-focused companies that buy from ARM, and you’ve got huge revenue potential.

What’s more, the rise of self-driving technology means demand for ARM-designed chips is growing in the automotive industry.

All this is a bit of a redemption arc for the company. ARM was founded in 1990 by chip makers in Cambridge and bought by SoftBank some 25 years later in 2016 for $32bn. Four years later, plans were announced to sell ARM to Nvidia.

Then, come April 2022, the deal was shelved by SoftBank after regulators around the world objected. Instead, the company said it would sell shares in ARM on the Nasdaq stock exchange in New York.

The rise in share value, then, is good news for SoftBank, proving the wisdom of its decisions – particularly since it’s been hit by losses due to the dropping value of other investments like WeWork, the office space firm.

SoftBank holds a roughly 90% stake in ARM and has seen its own shares grow almost 30% in the last week.

Even that failed sale to Nvidia is seeing some recuperation: Nvidia has since disclosed an investment in ARM, holding a stake in the company that’s now worth $147.3m.

So, listen up all you AI naysayers. If there’s one thing that the technology is doing for humanity, it’s making huge sums of money for the corporations that experienced some minor losses a few years ago. And that’s worth something.

The post Big bucks for… the big guys? appeared first on TechHQ.

]]>