James Tyrrell, Author at TechHQ

Tiny VLMs bring AI text plus image vision to the edge

Large language models capable of providing statistically likely answers to written text prompts are transforming knowledge work. AI algorithms enable tasks to be performed faster and reduce the need for specialist skills, which can be unnerving to witness for highly trained and experienced staff. But there’s magic to how neural networks can level the playing field and help users to understand their world. And a great example of this is the rise of vision language models (VLMs), which co-locate image features alongside multi-dimensional word embeddings.

So-called multimodal AI – neural networks capable of generating results based on multiple input types such as text and images – adds to the number of ways that deep learning can be used. OpenAI’s GPT-4 with Vision will answer questions about images – for example, if the user submits a photo or web link. Given a picture of the contents of a fridge, it can suggest ideas for dinner.

You can picture VLMs as image recognition on steroids. Multimodal AI models can do much more than just recognize features in a scene. Algorithms can gather insights from security footage, prompt maintenance crews to take action, analyse customer behaviour, and much more besides (as our test results show – scroll down to see the responses).

VLMs can provide a second pair of eyes that never get tired. However, these superpowers come at a cost and historically have required heavy lifting to happen behind the scenes in the cloud. Generative AI algorithms have gorged themselves on the contents of the internet in the name of fine-tuning their model weights.

But not all of those parameters are created equal, and developers are finding ways to trim AI models down to a more memory-efficient size for specific tasks. It means that algorithms can run locally at the edge, suiting fully remote use cases – such as drone inspections – where connectivity cannot be guaranteed.

One of a new breed of tiny VLMs capable of running at the edge is Moondream, which is advertised as being able to run anywhere, even on mobile devices and Raspberry Pis. The code to run the tiny AI text and vision model is available on GitHub, and there’s even a live demo environment so that you can jump straight in and put the VLM to the test.
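
For anyone who’d rather run the model themselves, the snippet below follows the usage pattern published on the Moondream GitHub repository and Hugging Face model card at the time of writing – the model id and the encode_image/answer_question calls come from that documentation, so check the repo for the current API before relying on it:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from PIL import Image

    # Model id and method names as shown on the Moondream model card
    # at the time of writing -- check the repo for the current API.
    model_id = "vikhyatk/moondream2"
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    image = Image.open("harbour.jpg")  # any local photo
    encoded = model.encode_image(image)
    print(model.answer_question(encoded, "Is it safe to walk straight ahead?", tokenizer))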

Tiny VLM test results

To discover what VLMs can do and highlight potential use cases for local AI models capable of answering questions about images, TechHQ submitted a variety of photos to the Moondream2 live demo hosted on Hugging Face.

Image 1

Harbour scene. Photo: JT.

Input: Is it safe to walk straight ahead?

Response: No, it is not safe to walk straight ahead. There is a dock with a red life ring attached to it, and a few boats are visible in the water. You need to be cautious and look out for any obstacles or hazards such as the dock, boats, or other watercraft, before walking forward.

Image 2

Stairway. Photo: JT.

Input: Is there something for me to hold on to?

Response: Yes, there is a railing on the stairs, and you can hold on to it while climbing to the top. The railing is attached to the wall, providing support for those ascending or descending the steps.

Input: Can you see an open window?

Response: Yes, there is an open window on the building in the background, and the sun is shining brightly illuminating the scene.

Image 3

Pedestrian crossing. Photo: JT.

Input: Is it safe for me to use the pedestrian crossing?

Response: No, it is not safe for you to use the pedestrian crossing, as the image shows a black car approaching the crossing. The car is likely not paying attention to the red light, which indicates that it’s not safe for pedestrians to cross at that moment.

Image 4

Road signs. Photo: JT.

Input: Should I go left or right to get to the airport?

Response: Right.

Input: Which way to the marina?

Response: Left.

Image 5

Bus stop. Photo: JT.

Input: Is there anywhere to catch a bus?

Response: Yes, there is a bus stop in the image, with benches for passengers to sit on and a covered area. A traffic cone is also present, and a bridge can be seen in the background.

Input: Where can I shelter if it rains?

Response: You can shelter at the bus stop if it rains, as there is a bus shelter nearby with benches and a roof providing protection from the elements.

Image 6

Luxury yacht. Photo: JT.

Input: What can you see?

Response: A helicopter is positioned on a ship, with a metal fence in the foreground. The background features a variety of elements, including buildings, trees, a road, a water body, and the sky.

Image 7

Coffee shop in Bristol, UK. Photo: JT.

Input: Is there anywhere to get a warm drink around here?

Response: Yes, there is a coffee shop nearby. The image shows a red building with the words “New Cut Coffee” on it, and a black fence. A black car is parked in the lot, and a white building is visible in the background. The sky is blue overhead.

Input: Does the coffee shop sell sandwiches?

Response: Yes.

Input: What else is there to eat at the coffee shop?

Response: In addition to coffee, the New Cut Coffee shop offers sandwiches and pastries. The image also features a building and a vehicle in the background.

Image 8

Web cam mode: the tiny VLM live demo can accept images via a connected webcam if browser permissions are given. Screengrab: JT.

Input: Can you see any ethernet ports?

Response: Yes, there are two visible ports on the device that the man is holding.

Verdict on tiny VLM testing

It’s hard not to be impressed with the performance of Moondream2 – a VLM that’s initialized with weights from SigLIP and Phi 1.5, and then trained using synthetic data generated by Mixtral.

Evaluating the responses generated from TechHQ’s test sequence of images and text inputs, it’s clear that Moondream2 gets more right than it gets wrong, and is capable of noticing fine details that weary human eyes may miss. Also, as mentioned, the AI model’s small footprint is another winning feature – paving the way for inference to take place on edge devices that could include drones or robots.

VLMs could turn out to be incredibly useful in helping robots orientate themselves in the world, and being able to run algorithms locally eliminates the need for network access – or at least provides redundancy in the case of outages.

A smart idea for smart glasses

Alongside its launch of the Quest 3 VR headset, Meta also announced a collaboration with Ray-Ban to produce an updated line of smart glasses. The product, which features Qualcomm’s weight-distributed Snapdragon AR platform, has some local voice control capabilities – for example, to instruct the spectacles to take images using the built-in ultra-wide 12MP camera. Users simply have to say, “Hey Meta, take a picture.”

Being able to couple those photos with a VLM running locally would turn the Ray-Ban smart glasses into a much more compelling product, and could bring scenes to life for visually impaired wearers without the need for pairing with a smartphone.

Vision assistance powered by edge-compatible VLMs could dramatically enhance the capabilities of digital camera-equipped devices. As the webcam image in our test sequence highlights, there’s the potential for algorithms to help with maintenance and repair tasks, distributing knowledge and expertise across the globe.

AI is doing for knowledge work what robotics has done for manufacturing, and it’s just the beginning.

Corn cobs have night vision prospects

It could be corn cobs rather than carrots that have the best prospects for improving night vision – thanks to the abundance of lignin and cellulose found in the crop. Researchers in China have used the materials to produce near-infrared (NIR) transparent optical filters for advanced imaging applications that highlight nature’s growing appeal to device makers.

“Lignocellulose-based bio-sourced materials are abundant, renewable, nontoxic, and mechanically strong candidates for optical materials,” explains the team in a paper published recently in Research – a Science Partner Journal.

NIR filters are crucial to the success of night vision cameras, as we shall soon discuss, and designs are typically made using two approaches. One option requires expensive glass materials that – while effective – can involve complicated fabrication steps. Alternatively, developers can use ultraviolet-visible absorbing dyes, which are easier to work with.

However, neither option has the environmental or economic appeal of using waste corn cobs. Plus, the performance – based on prototypes tested by the group – could be superior to conventional night vision components.

Why night vision cameras use NIR filters

To be effective, night vision cameras need to maximize the available light, which – once the sun has gone down, or is blocked indoors – is to be found in the NIR portion of the electromagnetic spectrum. NIR filters isolate this usable illumination band, while preventing interference from other wavelengths.

Without an NIR filter, night vision cameras would be easy to blind using a torch or a smartphone flash. Imaging systems would be similarly affected by other artificial light sources such as vehicle headlamps.

Regular digital cameras have an NIR blocking filter that stops photos and video from appearing washed out and makes sure that images match the optical response of the human eye.

As an aside, removing the NIR blocking filter from a digital camera can allow users to peer inside electronic devices that otherwise appear to be opaque – for example, if the case material is transparent to NIR light. Vein viewer technology uses near-infrared light to visualize major blood vessels beneath the skin, which also exploits the sensitivity of commercial imaging sensors outside the visible spectrum.

Eye-tracking in the near-infrared

Night vision style cameras are particularly well-suited to eye-tracking, as the NIR images provide strong contrast for algorithms to respond to. And there are a number of applications that exploit this opportunity. For example, several chip designers such as Qualcomm and Analog Devices have offerings that focus on driver and occupant monitoring for automotive applications.

Eye-tracking solutions, which operate at NIR wavelengths, can determine where the driver is looking at any moment in time to ensure that attention is being paid to the road ahead. Systems can also spot if the driver appears sleepy or is using a cell phone while the vehicle is in motion, and issue a safety warning.

Also, eye-tracking persists even if occupants are wearing sunglasses, as regular lenses are designed to block harmful ultraviolet rays and are transparent to NIR light. However, it’s possible to purchase privacy-focused spectacles, such as products sold by Reflectacles, which are fitted with an IR blocker.

In this case, the wearer’s eyes will remain obscured to 3D infrared facial mapping software and 2D facial recognition systems that use infrared light as their illumination source.

Returning to the researchers’ corn cob-derived NIR filter, the combination of cellulose and lignin appears to produce a high-performance and practical film.

“The captured lignin was fused to fill the gaps in a cellulose network, which then held the fibers tightly and created a homogeneous dense structure,” comments the group. “Both the lignin and the dense structure provided the biofilter with unique optical properties, including strong UV-vis light blocking (~100% at 400 nm and 57.58% to 98.59% at 550 nm), low haze (close to 0%), and high NIR transmittance (~90%).”

Electronics recycling – cheese waste has a taste for gold

“E-waste is going to be the richest ore of the future,” proclaims Jason Gaber, owner of Mount Baker Mining and Metals. Gaber has a YouTube channel where he shows viewers how hammer mills and shaker tables can be used to process component-laden circuit boards and separate plastics from a mix of metals, including gold.

The business of extracting gold and other valuable materials from electronic junk is growing and is even becoming a popular side hustle. One tonne of electronic circuit boards can yield in the region of 0.150 kg of gold, and over double that in silver. Plus, there’s likely to be anywhere from 250 – 300 kg of copper up for grabs per tonne of e-waste.

For years, device users have been throwing away – collectively – billions of dollars in precious metals as they dispose of unwanted electronics.

In the beginning, e-waste was sent overseas to become somebody else’s problem. But processing e-waste has the potential to be many times more lucrative (and much less polluting) than trying to extract gold and other precious metals from ore mined from the ground.

The ability of environmental clean-up operations to turn a profit is seeing a wave of new e-waste recycling solutions enter the market. And for those who can run their operations at scale, there’s money to be made in turning e-waste into gold.

One of the most ingenious approaches – which is still at an early stage, but generating commercially promising results – uses spongy nanofibrils created from a by-product of cheese-making to soak up gold ions in solution and turn them into flakes.

Demonstrating the potential of their approach, researchers at ETH Zurich in Switzerland used their cheese waste creation to obtain a 450 mg gold nugget from 20 junk motherboards. According to the team, the material was 90.8 percent pure (21-22 carats), which values the reclaimed gold at around USD 28 – based on today’s scrap gold price.

What’s more, the group claims that the combined cost of the source materials and energy for the process represents just 1/50th of the value of the gold extracted from the e-waste.
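
As a back-of-envelope check on the figures above – using assumed spot prices that are not from the article or the researchers – the sums work out roughly as follows:

    # Assumed early-2024 spot prices -- not from the article.
    GOLD_USD_PER_G = 65.0    # roughly USD 2,000 per troy ounce
    SILVER_USD_PER_G = 0.75
    COPPER_USD_PER_KG = 8.5

    # Yields per tonne of circuit boards, as quoted above
    # (silver taken at roughly double the gold yield).
    gold_g, silver_g, copper_kg = 150, 300, 275

    value = (gold_g * GOLD_USD_PER_G
             + silver_g * SILVER_USD_PER_G
             + copper_kg * COPPER_USD_PER_KG)
    print(f"~USD {value:,.0f} per tonne of boards")  # ~USD 12,000

    # Sanity check on the ETH Zurich nugget: 450 mg at 90.8% purity.
    pure_g = 0.450 * 0.908
    print(f"~USD {pure_g * GOLD_USD_PER_G:.0f}")  # ~USD 27, near the quoted USD 28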

Googling ‘how to turn e-waste into gold’ produces plenty of search hits, but many of the recipes feature toxic chemicals. However, by employing a bio-derived ionic sponge, the ETH Zurich researchers believe that they’ve found a gentler path to converting unwanted electronics into valuable materials. And they are not the only ones pursuing more environmentally friendly e-waste processing.

Mint Innovation, whose vision is to become the world’s leading provider of circular green metals, opened a commercial-scale facility in Sydney, Australia, in 2022. According to reports, the operation can salvage USD 85,000 in gold per day from recycled electronics – as well as being able to recover copper and other valuable metals.

Cleverly, Mint’s process – which was developed in New Zealand – makes use of bacteria and fungi that have evolved in regions rich in mine works and abandoned machinery. The organic soup is capable of absorbing metals, and Mint exploits those properties to process e-waste in a more environmentally friendly way compared with conventional methods.

According to Mint, everything leaving its plant is benign, which means that there are no chemical waste streams to be dealt with. And there’s more good news, as the process is applicable to other waste streams such as used batteries and catalysts.

Rapid prototyping dream team – VR and 3D printing

One of the biggest misconceptions about 3D printing – the notion that it doesn’t suit the production of goods in high volume – has been put to bed by brands such as Adidas, which has turned heads in the footwear industry with its striking midsole designs. Virtual reality (VR) is similarly misunderstood and sometimes dismissed as a gimmick. But put VR and 3D printing together, and you have a rapid prototyping dream team that’ll be the envy of your competitors.

And don’t think that you need to spend north of US $3000 on an Apple Vision Pro to reap the rewards. The benefits of combining VR and 3D printing can be realized with an affordable headset such as the Quest 3 – released in 2023 – or even a Quest 2, which is now available at a lower price point and with a software speed-up.

Iterate faster

A big fan of having designers use VR to iterate designs in 3D is Jared Goldman – VP of Design at New Balance (another footwear company at the bleeding edge of manufacturing). And there are some compelling reasons to back up his thinking.

By Goldman’s estimation, physical samples have a turnaround time of around 45 days – once you’ve added up sending the tech package, production of the prototypes in Asia, and shipping of the shoes back to the US. However, it’s possible to produce photoreal samples of the same designs in seven days, or maybe less, in a virtual environment.

Decision makers can see the shoes from all angles in a virtual environment and can give feedback with a confidence that compares with handling physical samples. Plus, headset-wearing team members can collaborate easily and in real time, no matter where they are across the globe – provided that they have an internet connection.

“The better you can express your idea, so that somebody who is a non-designer can understand it, the more successful you’re going to be,” Goldman comments in a case study shared by Gravity Sketch – a developer of VR design software that is increasingly becoming part of 3D printing product workflows.

Example of a VR and 3D printing toolchain

  • Gravity Sketch – for virtual product creation
  • Blender – to add materials appearance for final design validation
  • Ultimaker Cura – for slicing and 3D printing production preview
  • 3D printers – to produce the finished goods

Today, there are numerous online tutorials showing how to combine VR and 3D printing to go from first idea to finished product. Beginning with a rough sketch created using handheld VR controllers, the next step is to add virtual form to the digital design, with lighting effects making the output appear all the more realistic.


Slicing software will prepare the model for 3D printing, where it’s possible to adjust support material and preview the build to double-check that all’s well before committing fabrication time and materials to the job.
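
For readers who want to script that slicing step, a minimal sketch is shown below. It drives CuraEngine (the slicer behind Ultimaker Cura) from Python; the flag names follow the CuraEngine README, while the file names and the settings override are placeholders to verify against your installed version:

    import subprocess

    # Flag names follow the CuraEngine README; the definition file,
    # mesh, and settings override below are placeholders.
    subprocess.run([
        "CuraEngine", "slice",
        "-j", "fdmprinter.def.json",   # printer/profile definition
        "-l", "design_from_vr.stl",    # mesh exported from the VR tool
        "-s", "support_enable=true",   # per-job setting override
        "-o", "design_from_vr.gcode",  # toolpath ready for the printer
    ], check=True)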

And it’s not just about keeping sneakerheads happy with a stream of new and exclusive trainers: VR and 3D printing can be combined to deliver progress in areas such as medical visualization too.

Materialise – a pioneer in the field of 3D printing – points out that the number of hospital-based 3D printing facilities has increased exponentially to help with tasks such as patient-specific surgical planning.

On TechHQ, we’ve written about how beneficial VR can be to medical training and upskilling hospital staff. And 3D-printed anatomical models take that tactile experience to the next level.

Construction is another area that’s taking a keen interest in additive manufacturing – this time using giant 3D printers to build homes layer by layer. Here, VR can be used by designers and home buyers to experience what it’s like to move around a digital render of the building before nozzles start squirting out cement.

“We see this tool as an exciting way to get clients, designers, and contractors up to speed on the inner workings of how a 3D construction printing project actually works,” commented Marchant Van Den Heever, CTO of HTL.tech – a distributor of 3D construction printing technology in the UK and Ireland.

3D printed shoes – footwear signals future manufacturing trend

3D printed shoes are a great example of where additive manufacturing absolutely lives up to the hype. And following the progress made by sneaker designers such as Adidas, which has built up considerable experience in using 3D printers to make trainers, reveals much about how future products could be manufactured.

In the early days, 3D printing was badged as a rapid prototyping tool. Use cases were one-off parts or mock-ups to double-check that all was well before pulling the trigger on expensive injection molding tooling. But, as Adidas and other leading lights have discovered, the appeal of 3D printing for making shoes and other mass-produced goods goes much further.

One of the biggest misconceptions about 3D printing is that it cannot be used to make products at scale. However, the progress made by Adidas and its additive manufacturing partner Carbon in developing 3D printed shoes for the mass market challenges that.

The rise of printer farms – sometimes referred to by operators as ‘warehouses where the shelves make the product’ – makes it possible for 3D printing to compete with injection molding at high volume. And customers will be surprised by just how many parts that modern 3D printing facilities can deliver.

Naturally, the design should play to the strength of the additive manufacturing process and suit speed-ups – such as having the 3D printer automatically eject the part after completing each build – to hit big numbers.

Today, manufacturing experts will recommend 3D printing to customers wanting up to a million parts a year, and the reasons are more than just cost-parity with injection molding. And this is where it’s useful to pick up a pair of 3D printed Adidas 4DFWD shoes to see the manufacturing appeal up close.

Forward thinking: the 3D printed midsole used in the Adidas 4DFWD features a variable stiffness bow tie lattice that compresses forward to improve running efficiency. Image credit: Adidas.

Trainers often push the design envelope in terms of the combination of shapes and materials. Sneakerheads view trainers as works of art, and the lattice-based midsole of the 4DFWD is a thing of beauty. It has attractive mechanical properties too.

Designers of high-performance trainers want to be able to vary the stiffness of the midsole along the length of the shoe – providing comfort upon first strike, but without slowing down the athlete. And having lattice geometries greatly widens the range of mechanical properties that are available from a given material.
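
Adidas and Carbon haven’t published their lattice equations, but a textbook scaling law for bending-dominated cellular structures (Gibson–Ashby) illustrates the point – halving the relative density of a lattice cuts its effective stiffness roughly fourfold. The sketch below uses an assumed solid-material modulus purely for illustration:

    # Gibson-Ashby scaling for bending-dominated lattices:
    #   E_eff ~ C * E_solid * rho_rel**2, with C ~ 1 for open cells.
    E_SOLID_PA = 2.0e9  # assumed modulus of the printed polymer, for illustration

    for rho_rel in (0.10, 0.20, 0.30, 0.40):
        e_eff = E_SOLID_PA * rho_rel**2
        print(f"relative density {rho_rel:.0%}: E_eff ~ {e_eff / 1e6:.0f} MPa")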

What’s more, 3D printed shoes open the door to massive customization – for example, lattices could be tailored to individual runners, factoring in differences in weight and stride length – making products more attractive to customers.

Updatable, updatable, updatable

The customization that goes hand-in-hand with 3D printing helps brands too, as they can update their range whenever they choose. Having to make changes to injection molding tools limits the frequency of product updates to maybe just a couple of times a year. But with 3D printing, it’s done in software – once the new file is loaded, you’re good to go.

Manufacturing updates can be applied rapidly too. For example, as Carbon’s research and development team discovered ways to produce the 3D printed Adidas shoes more rapidly and efficiently, those process improvements could be sent seamlessly to the manufacturing facility without the need for any personnel to travel or change parts.

And it’s telling that other brands have joined Adidas in 3D printing shoes. Footwear designs have been walked down the catwalk and some firms – such as Vivobarefoot – are exploring how the innovative shoe manufacturing technology can support a circular economy for trainers.

Being able to print locally and on-demand can shorten supply chains and reduce the amount of unsold product that goes to waste. If circular economy ambitions can be realized on top of that, the future of mass-produced footwear and other 3D printed products could become even more attractive.

Affective computing – how to build better digital products

“The goal of affective computing is to create a computing system capable of perceiving, recognizing, and understanding human emotions and responding intelligently, sensitively, and naturally,” write researchers in the journal Intelligent Computing.

Hitting such a milestone would signal a UX breakthrough with digital applications that were much more instinctive to use, as programs would be able to determine when users are struggling – through emotional insights – and make adaptations.

A popular approach has been to use wearables fitted with sensors, but making sense of the data is easier said than done. Developers are aware that the physiological response accompanying an emotional event can be the same for different emotions. For example, fear, anger, and excitement can all send heart rates racing. To be effective, affective computing algorithms need to differentiate between such signals.

The good news is that machines have many inputs to draw upon when trying to determine the emotional state of users. Many of the devices we use daily have cameras and microphones, which pave the way for image, gesture, and speech recognition. So, the idea of applying affective computing principles to our devices is by no means far-fetched.
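
To see why fusing modalities matters, consider a toy sketch – not any vendor’s API – in which heart rate alone cannot separate fear from excitement, but adding a facial-valence score from an assumed expression model can:

    from sklearn.tree import DecisionTreeClassifier

    # Features: [heart_rate_bpm, facial_valence], where valence is a
    # -1..+1 score assumed to come from a facial-expression model.
    X = [
        [115, -0.8], [120, -0.7], [118, -0.9],  # fear: high HR, negative face
        [117,  0.8], [122,  0.7], [119,  0.9],  # excitement: high HR, positive face
        [65,  0.1], [70, -0.1], [68,  0.0],     # calm: low HR, neutral face
    ]
    y = ["fear"] * 3 + ["excitement"] * 3 + ["calm"] * 3

    clf = DecisionTreeClassifier(random_state=0).fit(X, y)
    # Identical heart rates, opposite expressions -> different emotions.
    print(clf.predict([[121, -0.85], [121, 0.85]]))  # ['fear' 'excitement']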

Emotion as a service

Speaking on the topic of affective computing (what is it and why should I care?), Håkan Silfvernagel – a consultant with expertise in human-machine interfaces and robotics – mentions Microsoft’s Emotion API, which is capable of receiving an image of a human face and returning one of eight emotions, including happiness, sadness, fear, and disgust.

Other tech firms such as Amazon and IBM have developed emotion recognition capabilities as well. And engagement analytics provided by Affectiva – whose co-founder, Rana el Kaliouby, wrote ‘Girl Decoded’ to share her quest to reclaim our humanity by bringing emotional intelligence to technology – are popular with many of the world’s largest advertisers.

Considering device applications, Affectiva has applied its so-called Emotion AI to automotive settings using algorithms fed with in-cabin camera footage to detect the state, emotions, and reactions of drivers and other occupants in the context of vehicle environments.

It’s not a stretch to imagine soothing music being piped out of a car’s speakers to calm the driver during rush hour traffic – a use case that’s been given to highlight the adaptive properties of affective computing systems.

Lately, advances in AI have provided a major boost to the field – a point made by Rosalind Picard, who wrote the book on Affective Computing and has been working in the area for decades. Sentiment analysis of spoken words and text has never been as straightforward to carry out as it is today, and that information can be fed back into education and training systems to gauge the performance of learners.

Likewise, image recognition and object tracking have come on leaps and bounds thanks to AI advances, and those breakthroughs positively impact affective computing systems.

Apple has found itself having to interpret the expressions of wearers of the Vision Pro to generate what the tech giant dubs personas, which are used as avatars that participate in FaceTime calls.

The more emotional signals that VR systems can provide, the richer the experience for headset wearers. And affective computing will be key for tempting more business users to participate in the metaverse.

Advances are being made in brain-computer interfaces, which could give developers another perspective on users’ emotional states. As the cost of wearables incorporating EEG measurements comes down, this information will add to the data fusion used by affective computing devices.

Signals gathered from the brain could spotlight signs of fatigue, joy, or frustration – to give just a few examples. Also, beyond making software more user-friendly, virtual assistants programmed using affective computing techniques have been shown to encourage creative problem-solving, which hints at further wins for business users.

5G positioning adds to cellular network skillset

With Mobile World Congress (MWC) 2024 getting underway in Barcelona this week, it’s worth reflecting on one advantage of 5G that often goes under the radar – its positioning prowess. Mobile networks have long had the ability to triangulate user equipment based on signals received from neighboring base stations, but 5G positioning takes this to the next level.

What’s more, while ramping up the accuracy of radio-based positioning typically incurs a cost for users – who may need to install additional hardware – 5G breaks this trend by using existing communications technology to deliver high-resolution location information at a lower price point.

Want to know more about 5G positioning? Then MWC 2024 is the place to be.

Thanks to features added to the global wireless standard, 5G networks offer positioning capabilities that can pinpoint connected devices within a 1 m area, and that’s just the beginning. “5G Advanced represents a further development of 5G technology and promises faster data transmission of 20 GBit/s and localization accuracies of 20-30 cm to meet the growing demands of the connected world,” writes the Fraunhofer Institute for Integrated Circuits (Fraunhofer-Institut für Integrierte Schaltungen).

Applications are numerous and will appeal to industrial users in particular. As Ericsson – a provider of indoor 5G positioning systems – points out, smart manufacturing operators can use real-time location information to specify tool settings. “Tightening wheels and putting on car doors requires different torque curves,” explain Magnus Kristersson and Partha Sarathy – product specialists at the global communications technology firm – in a related blog post. “With indoor 5G positioning, we can automate getting the right torque curve to the right tool while disabling tools that are not in a work cell.”

Qualcomm – a developer of chips for mobile devices – has put together a test bed highlighting its industrial precise positioning capabilities using 5G and complementary technologies. In the demonstration, engineers used 5G positioning augmented with machine learning RF fingerprinting to locate machinery under non-line-of-sight conditions.

The setup has six 5G transmission reception points distributed within the facility, which can follow objects of interest with high precision thanks to the data fusion approach.

On TechHQ, we’ve written about how private 5G networks can be a game-changer for businesses. Firms can use private 5G networks to bring connectivity to locations not readily served by public mobile networks – for example, operations in remote areas. But the benefits don’t have to stop there.

If companies are looking for accurate real-time location services on top of data transmission capabilities, then it’s possible that 5G networks could perform both duties, saving on the amount of upfront investment required.


Modern wireless standards such as 5G feature positioning reference signals, which can be received passively by user equipment to help pinpoint devices. It’s also possible to measure round trip time using multiple cells to deliver positioning information. And one of the big breakthroughs is the use of angular based methods that report on the arrival of signals across 5G antenna arrays.
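
To make the round-trip-time idea concrete, here is a minimal textbook multilateration sketch – with made-up station coordinates and a simulated device, not any vendor’s implementation – that recovers a 2D position from ranges to three base stations:

    import numpy as np

    # Illustrative 2D base-station coordinates in metres.
    stations = np.array([[0.0, 0.0], [200.0, 0.0], [0.0, 200.0]])
    c = 299_792_458.0  # speed of light, m/s

    # Synthesise round-trip times for a device at (60, 80) so the
    # example checks itself; a real network measures these directly.
    device = np.array([60.0, 80.0])
    rtt = 2 * np.linalg.norm(stations - device, axis=1) / c
    ranges = rtt * c / 2  # one-way range estimates

    # Linearised least squares: subtracting the first range equation
    # from the others eliminates the quadratic terms in the unknowns.
    A = 2 * (stations[1:] - stations[0])
    b = (ranges[0]**2 - ranges[1:]**2
         + np.sum(stations[1:]**2, axis=1) - np.sum(stations[0]**2))
    print(np.linalg.lstsq(A, b, rcond=None)[0])  # ~[60. 80.]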

Researchers in Sweden have shown how developments have made it possible to perform vehicular positioning with only a single base station, which shows how car-makers could automate navigation when GPS signals are unavailable.

Satellite navigation can become unpredictable when relatively weak GPS signals are blocked in dense urban areas. Mass transit systems such as trains can also be disrupted when satellite positioning fails, as their automatic door-opening systems depend on GPS functionality.

The list of potential use cases for 5G positioning is long, spanning indoor and outdoor scenarios from asset tracking to emergency rescue. Plus, solutions can be portable – such as the Fraunhofer IIS Nomadic 5G positioning testbed, which the institute has on display at MWC 2024.

GoPro-equipped robot gloves teach robots new tricks

Realizing a future where humans do less manual and repetitive work and robots do more depends on finding an efficient way of teaching machines to perform such tasks. Ideally, the skills transfer process would generate rich data and be fast and cheap to carry out, but coming up with a method that ticks all of those boxes has proven difficult – until now. Hitting that sweet spot appears to be a pair of GoPro-equipped robot gloves developed by researchers in the US, which – according to video footage – could provide an easy way of training robots to do all kinds of things.


What’s more, all of the universal manipulation interface know-how has been open-sourced, including the 3D printing instructions for making the handheld robot gloves. As photos reveal, the soft finger design is capable of gripping a raw egg securely without breaking the shell.

To begin the skills transfer process between human and machine, users put on a pair of robot gloves and carry out the target task multiple times to build a training dataset. Don’t be discouraged by the need for repetition, as the results can be generalized to similar scenarios – using a so-called diffusion policy that has been shown to outperform existing state-of-the-art robot learning methods – which saves time later on.

Adding to the appeal, those same results can be used by different models of robot – provided that the unit can be fitted with duplicates of the robot gloves. In the demonstrations given by the team, whose members are based at Stanford University, Columbia University, and Toyota Research Institute, robots are taught how to place an espresso cup on a saucer and even wash up dirty plates.

Key to the success of the approach is the use of GoPro cameras – one on each of the robot training gloves and one on each of the grippers in the robot-mounted setup. The cameras feature fisheye lenses to capture a wide field of view, gathering large amounts of detail from the scene, and include inertial measurement units (IMUs) to enable pose tracking.

The team makes sure that all of the data feeds are latency-matched, which means that robots can carry out two-handed tasks correctly and perform actions such as throwing objects with high precision. Also, there’s a one-off mapping step that uses a visual code to help with simultaneous localization and mapping (SLAM).
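
The paper doesn’t spell out its synchronization code, but the core idea of latency-matching two camera streams can be illustrated with nearest-timestamp pairing – a simplified stand-in for the real pipeline:

    import numpy as np

    def latency_match(t_a, t_b, tol=0.010):
        """Pair each timestamp in stream A with the nearest timestamp
        in stream B, discarding pairs more than tol seconds apart."""
        idx = np.clip(np.searchsorted(t_b, t_a), 1, len(t_b) - 1)
        nearest = np.where(t_a - t_b[idx - 1] < t_b[idx] - t_a, idx - 1, idx)
        keep = np.abs(t_b[nearest] - t_a) <= tol
        return np.flatnonzero(keep), nearest[keep]

    # Two ~60 fps camera streams that started at slightly different times.
    t_glove = np.arange(0, 1, 1 / 60) + 0.002
    t_wrist = np.arange(0, 1, 1 / 60) + 0.009
    ia, ib = latency_match(t_glove, t_wrist)
    print(f"{len(ia)} matched frame pairs")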

If sufficient numbers of people join in, robots could quickly be taught to do many common industrial tasks using the open-sourced robot gloves – and that knowledge shared. Currently, robots are often taught through teleoperation, which can be a slow process. The wearable teaching grippers, on the other hand, provide a much speedier option and are more instinctive to use.

“By recording all information in a single, standardized MP4 file, UMI’s data can be easily shared over the Internet, allowing geographically distributed data collection from a large pool of nonexpert demonstrators,” writes the group in its paper – ‘Universal Manipulation Interface: In-The-Wild Robot Teaching Without In-The-Wild Robots’ – which is free-to-read on arXiv.

Timing the robot training process, the researchers found that their universal manipulation interface was around three times faster to use than teleoperation. Also, the learning framework was shown to be tolerant of big changes in lighting conditions and other interference.

For example, robots trained using the gloves can continue performing their tasks even if their base is moved or humans perturb the scene in other ways – such as adding more sauce to the dirty plates.

The dishwashing task is noteworthy as it’s what’s termed an ultra-long horizon task from an automation perspective, with the success of each step dependent on the previous one. Here, the robot needs to perform seven actions in sequence – turn on the faucet, grasp the plate, pick up the sponge, wash and wipe the plate until the ketchup is removed, place the plate, place the sponge, and turn off the faucet.

Given the apparent success of the approach, regular dishwashing appliances may face some competition from two-armed robots in the future – and it gets you thinking about other jobs that robots could do around the home.

Motorsport lessons: what can tech CEOs learn from MotoGP & F1?

With season 6 of Netflix’s hugely successful Drive to Survive hitting screens on Friday, motorsport fans will be tuning in to reminisce about the highs and lows of the 2023 F1 World Championship. But what viewers may not realize is just how much can be learned more broadly from watching racers driving around a track. Motorsport, it turns out, has lessons for businesses of various types in understanding how employees will interact as company fortunes rise and fall.

“Sports – particularly motorsports – can be a good proxy for several other industries as they are extremely competitive: if you don’t perform and progress you may be out,” comments Hans Frankort – a strategy expert at Bayes Business School – City, University of London. “Workers in sectors such as consultancy and financial services face similar pressures.”

Frankort, together with the other authors of the study – dubbed ‘Revving up or backing down? Cross-level effects of firm-level tournaments on employees’ competitive actions’ – used overtake data from riders competing in MotoGP from 2004 to 2020 to generate motorsport lessons for business leaders.

“Our findings reveal how riders adjust their internal and external overtakes based on their team’s competitive threats and opportunities, and on the relative resource endowments of the teams supplying such threats or opportunities,” writes the team in its paper.

Translating this behaviour from the racetrack to the office, the experts note how ambitious workers will change their approach depending on whether their employer is doing well – in other words, leading the competition – or finding business conditions more challenging.

Considering the MotoGP data, teammates were less likely to overtake each other when the team as a whole was struggling. “If a firm is facing threats, such as losing market share to smaller rivals, workers may feel that infighting is poor form,” said Frankort. “Instead, they would focus on competing against rival firms.”

More motorsport lessons for business

There are other observations too – for example, replacement riders (whom the researchers liken to agency workers) are keen to challenge their teammates when the team is doing well, and all competitors when their employer is struggling. The explanation given is that those without a permanent contract will try much harder to impress than riders and drivers embedded within a team.

So how do all of these insights help CEOs and other senior management become better leaders? One of the strong takeaways is the link between how a company’s performance is pitched to employees and the competitive actions of that firm’s staff.

If employees believe that things are going well they may be more tempted to poach a colleague’s clients and position themselves as being the engine of success, according to the study’s authors.

Motorsport series such as F1 and MotoGP are often described as pathfinders for new technologies that will one day end up in road cars or motorcycles, but they have also provided numerous case studies for business leaders. For example, Paolo Aversa – one of the authors of the MotoGP study – has made a career of using sports data to advance management theory.

Over the years, Aversa has chaired multiple ‘Competing in turbulent environments: Lessons from Formula One’ events, which quickly sell out. The seminars, some of which are still available to watch on YouTube, bring together race car designers, motorsport CEOs, and lecturers in management and business strategy.


Tech firms such as Netflix have also done well from F1 and one of the motorsport lessons learned is that drama on and off track makes for great entertainment. On paper, the idea of having a thousand people produce two cars that drive in circles sounds like it’d be a hard sell to viewers, but the popularity of Drive to Survive tells a different story.

And motorsport fans will need no reminding that F1 pre-season testing gets underway today in Bahrain, where teams and fans will get a first impression of the pecking order for the 2024 season.

Sesame seed-sized, anti-counterfeit tag gets smart glue upgrade

RFID tags and other product identifiers such as barcode labels are useful in keeping track of goods across supply chains, but they have their limitations. You can put an RFID tag or barcode label on the outside of a product or box of supplies, but what about the smaller items inside? Paving the way for many more components to be securely labeled is an anti-counterfeit tag that measures just 2 x 2 mm (about the size of a sesame seed) devised by researchers in the US.

The approach, which was first unveiled in 2020, uses terahertz radiation to read cryptographic codes stored on the tiny chips. Similar to RFID designs, the data transfer process can be powered by energy emitted from the scanner, which means that the anti-counterfeit tag needs no battery and should last for years.

What is terahertz radiation?

Terahertz radiation has been described as light that is almost heat. And the terahertz range of frequencies sits at the far end of the infrared band, adjacent to the microwave band, within the electromagnetic spectrum.

Not only can these submillimetre waves pass through clothing and plastics to image hidden objects, terahertz radiation can also be used to identify materials in its path based on spectroscopic fingerprints.

Given these properties, it’s no surprise to learn that the terahertz portion of the electromagnetic spectrum is ripe with security scanning applications. What’s more, unlike X-rays, terahertz radiation is non-ionizing – meaning that it won’t damage living cells.

So far, so good, but – as observers have highlighted – the original design of the MIT group’s anti-counterfeit tag shared a security vulnerability common to mainstream technology such as conventional RFID labels. By simply removing the security ID from a genuine product and attaching it to a fake item, counterfeiters would be able to easily defeat the authentication system.

To combat this, the team has come up with an ingenious solution, which centers on the glue used to attach the anti-counterfeit tag to the host product. Small metallic particles are added to the adhesive during formulation and their final pattern when the tag is deployed is used as a security property.

“These metal particles are essentially like mirrors for terahertz waves. If I spread a bunch of mirror pieces onto a surface and then shine light on that, depending on the orientation, size, and location of those mirrors, I would get a different reflected pattern. But if you peel the chip off and reattach it, you destroy that pattern,” explains Ruonan Han – leader of the Terahertz Integrated Electronics Group.


The team is presenting its latest design at the 2024 IEEE International Solid-State Circuits Conference (ISSCC), which is taking place this week in San Francisco, CA. To incorporate the new security feature, users would take a reading of the anti-counterfeit tag when it was first attached to an item and then use that pattern data for verification.

Collaborating with colleagues at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), the researchers have shown how a machine learning model can be trained to match glue patterns with more than 99 percent accuracy.
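
The researchers use a trained model for the matching step; a simpler correlation-based stand-in – illustrative only, and running on synthetic data – captures the enrol-then-verify idea:

    import numpy as np

    def enroll(pattern):
        """Normalise the reflection pattern read at first attachment."""
        p = pattern.astype(float).ravel()
        p -= p.mean()
        return p / np.linalg.norm(p)

    def verify(reference, candidate, threshold=0.9):
        """Accept only if a fresh reading correlates strongly with the
        enrolled fingerprint; peeling the tag rearranges the glue
        particles and destroys the correlation."""
        c = candidate.astype(float).ravel()
        c -= c.mean()
        c /= np.linalg.norm(c)
        return float(reference @ c) >= threshold

    rng = np.random.default_rng(7)
    original = rng.normal(size=(32, 32))                 # reading at enrolment
    reread = original + 0.1 * rng.normal(size=(32, 32))  # same tag, sensor noise
    peeled = rng.normal(size=(32, 32))                   # tag moved to a fake

    ref = enroll(original)
    print(verify(ref, reread), verify(ref, peeled))  # True False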

The MIT project is by no means the only effort to secure supply chains. Quantum Base – a spin-out from Lancaster University in the UK – uses nanoscale quantum physical unclonable functions to assert that labeled goods are authentic. The anti-counterfeit solution is said to be impossible to copy, clone or fake and authenticates in seconds using a regular smartphone.

As Quantum Base points out, there are multiple reasons why firms would want to invest in anti-counterfeit tag technology. Companies that are unable to validate critical elements of their supply chain put their reputation at risk and expose themselves to substandard products.

The firm’s solution is based on carbon nanomaterials that – when applied to surfaces – can be used to generate security fingerprints, which are reported to be more unique than DNA.
